Information Analysis - Science topic
Explore the latest questions and answers in Information Analysis, and find Information Analysis experts.
Questions related to Information Analysis
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
The scale of disinformation on the Internet, including fake news, has been growing in recent years, mainly on social media, and above all on the sites popular among young people, children and teenagers. The growing scale of disinformation is particularly damaging socially in view of its key objective: cybercriminals and certain organisations publish posts and banners containing fake news through fake profiles of fictitious Internet users. The aim is to influence public opinion and the general social awareness of citizens; to shape assessments of specific policies of governments, national and/or international organisations, and public or other institutions; to influence the ratings, credibility, reputation and recognition of specific institutions, companies and enterprises, their product and service offerings, and individuals; and to sway the results of parliamentary, presidential and other elections. In parallel, both the scale of cybercriminal activity and the sophistication of cybersecurity techniques have been growing on the Internet in recent years. Therefore, as part of efforts to reduce the scale of disinformation spread deliberately by specific national and/or international organisations, computerised platforms are being built to detect fake news and other forms of disinformation on the Internet using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies. Since cybercriminals and organisations generating disinformation use new Industry 4.0 technologies to create fake profiles on popular social networks, the same new information technologies, including Big Data Analytics, artificial intelligence, deep learning and machine learning, should also be used to reduce the scale of such activities harmful to citizens.
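As a point of reference, here is a minimal sketch (in Python, using scikit-learn) of one component such a platform could contain: a supervised text classifier that flags posts as likely disinformation. The example posts and labels are hypothetical; a real platform would combine such models with streaming data ingestion, fact-checking services and human review.
```python
# Minimal sketch of one building block of a fake news detection platform:
# a supervised classifier over post text. Training data here is a toy,
# hypothetical set; production systems train on large annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Government report confirms inflation fell to 5% in May",
    "Scientists admit the moon landing was filmed in a studio",
    "Central bank publishes quarterly financial stability review",
    "Miracle drug cures all diseases, doctors hate this trick",
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely disinformation

# TF-IDF features over word unigrams/bigrams, then logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Probability that a new post is disinformation.
print(model.predict_proba(["Doctors hate this miracle cure"])[:, 1])
```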
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
And what do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

WHAT IS INFORMATION? WHAT IS ITS CAUSAL (OR NON-CAUSAL?) CORE? A Discussion. Raphael Neelamkavil, Ph.D. (Quantum Causality), Dr. phil. (Gravitational Coalescence Cosmology)
Questions Addressed: What is information? Is it the same as the energy or matter-energy that is basic to it? Is it merely what is being communicated via energy and different from the energy? If it is different, is it causally or non-causally different or a-causally? Is it something purely physical, if it is based on and/or identifiable to energy? What is the symbolic nature of information? How does information get symbolized? Does it have a causal basis and core? If yes, how to systematize it? Can the symbolic aspect of information be systematized? Is information merely the symbolic core being transmitted via energy? If so, how to connect systematically and systemically the causal core and the symbolic core of languages? If language is a symbolizing production based on consciousness and life – both human and other – and if the symbolic aspect may be termed the a-causal but formatively causal core or even periphery of it, can language possess a non-causal aspect-core or merely a causal and an a-causal aspect-cores? If any of these is the case, what are the founding aspects of language and information within consciousness and life? These are the direct questions involved in the present work. I shall address these and the following more general but directly related questions together in the proposed work.
From a general viewpoint, the causal question engenders a multitude of other associated paradoxical questions at the theoretical foundations of the sciences. What are the foundations of all sciences and philosophy together, upon which the concepts of information, language, consciousness which is the origin of language, and the very existent matter-energy processes are based? Are there commonalities between information, language, consciousness, and existent matter-energy processes? Could a grounding of information, language, etc. be helped if their common conceptual base on To Be can be unearthed, and their consciousness-and-life-related and matter-energy-related aspects may be discovered? How to connect them to the causal (or non-causal?) core of all matter-energy? These are questions more foundational than the former set.
Addressing and resolving the foundational question of the apriority of Causality is, in my opinion, possibly the most fundamental solution. Hence, addressing these is the first task. This should be done in such a manner that the rest follows axiomatically and thus naturally. Hence, the causal question is to be formulated first, and then the possible ways in which it is reflected in mental concepts may axiomatically be demonstrated to follow suit. This task appears over-ambitious, but I shall attempt to demonstrate as rationally as possible that the connections are strongly based on the very implications of To Be. As regards language, I deal only with verbal, nominal, and attributive (adverbs and adjectives) words, because (1) including other parts of speech would more than double the number of pages and (2) those other parts of speech are much more complicated and hence may be thought through and integrated into the mainline theory here, say, over the course of another decade or more!
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also develop search engines as part of their online information services are working on implementing ChatGPT-type artificial intelligence into those search engines. There are currently discussions about the social and ethical implications of combining these technologies and offering the result in open access on the Internet. The considerations concern the possible risk of manipulation of the information message in the new media, potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies, the possibility of deliberately shaping public opinion, and so on. This raises a further issue: the legitimacy of creating a control institution that would continuously monitor the objectivity, independence and ethics of the algorithms used in implementing ChatGPT-type artificial intelligence in Internet search engines, including those that top the rankings of the tools Internet users rely on to search for specific information ever more precisely and efficiently. If such a system of institutional control by the state is not established, or if a control system involving the companies developing these technologies does not function effectively or does not keep up with technological progress, there may be serious negative consequences in the form of increased disinformation in the new Internet media. How important this may become is evident from what is currently happening around the social media portal TikTok. On the one hand, it has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities. It cannot be ruled out that new types of social media will emerge in which the above-mentioned implementation of ChatGPT-type artificial intelligence in online search engines will find application: search engines operated on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria search controlled by the Internet user for precisely defined information and/or data. New opportunities may also arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies and institutions on social media sites and/or on web-based publication-indexing sites and knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

The fourth technological revolution currently underway is characterised by rapidly advancing ICT and Industry 4.0 technologies, including but not limited to machine learning, deep learning and artificial intelligence... What's next? Intelligent, thinking, autonomous robots?
The fourth technological revolution currently underway is characterised by rapidly advancing ICT and Industry 4.0 technologies, including but not limited to machine learning, deep learning and artificial intelligence. Machine learning, machine self-learning and self-improving machine systems are near-synonymous terms within the field of artificial intelligence, with a particular focus on algorithms that improve themselves automatically through experience gained from exposure to large data sets. Machine learning algorithms build a mathematical model of data processing from sample data, called a training set, in order to make predictions or decisions without being explicitly programmed by a human to do so. They are used in a wide variety of applications, such as spam protection, i.e. filtering Internet messages for unwanted correspondence, or image recognition, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks. Deep learning is a subcategory of machine learning that involves the creation of deep neural networks, i.e. networks with multiple layers of artificial neurons. Deep learning techniques are designed to improve, among other things, automatic speech processing, image recognition and natural language processing. Simple neural networks can be designed manually, so that a specific layer detects specific features and performs specific data processing, while learning consists of setting appropriate weights: significance levels for components of the problem, derived from processing and learning from large amounts of data. In large neural networks, the deep learning process is, to a certain extent, automated and self-contained. In this situation, the network is not designed to detect predetermined features but detects them itself from appropriately labelled data sets. Both such data sets and the operation of the neural networks themselves must be prepared by specialists, but the features are detected by the program itself. Large amounts of data can therefore be processed, and the network can automatically learn higher-level feature representations, which means it can detect complex patterns in the input data. Accordingly, deep learning systems are built on Big Data Analytics platforms designed so that the deep learning process is performed on a sufficiently large amount of data. Artificial intelligence (AI) is the 'intelligent', multi-criteria, advanced, automated processing of complex, large amounts of data, carried out in a way that alludes to certain characteristics of human intelligence exhibited by thought processes. As such, it is the intelligence exhibited by artificial devices, including certain advanced ICT and Industry 4.0 systems and the devices equipped with them. The concept of artificial intelligence is contrasted with natural intelligence, i.e. that of humans. Artificial intelligence thus has two basic meanings. On the one hand, it is a hypothetical intelligence realised through a technical rather than a natural process.
On the other hand, it is the name of a technology and a research field of computer science and cognitive science that also draws on the achievements of psychology, neurology, mathematics and philosophy. In computer science and cognitive science, artificial intelligence also refers to the creation of models and programs that simulate at least partially intelligent behaviour. Artificial intelligence is also considered in philosophy, within which a philosophy of artificial intelligence is being developed, and it is a subject of interest in the social sciences. The main task of research and development work on artificial intelligence and its new applications is the construction of machines and computer programs capable of performing selected functions analogously to the human mind working with the human senses, including processes that do not lend themselves to numerical algorithmisation. Such problems are sometimes referred to as AI-hard and include decision-making in the absence of complete data, analysis and synthesis of natural languages, logical (rational) reasoning, automated theorem proving, computer logic games such as chess, intelligent robots, and expert and diagnostic systems, among others. Artificial intelligence can be developed and improved by integrating it with machine learning, fuzzy logic, computer vision, evolutionary computing, neural networks, robotics and artificial life. Artificial intelligence technologies have been developing rapidly in recent years, driven by their combination with other Industry 4.0 technologies, the use of microprocessors and digital computing devices of ever-increasing capacity for multi-criteria processing of ever-larger amounts of data, and the emergence of new fields of application. Recently, the development of artificial intelligence has become a topic of discussion in various media due to ChatGPT, an open-access, automated, AI-enabled solution with which Internet users can hold a kind of conversation. The solution is based on, and learns from, a collection of large amounts of data extracted in 2021 from specific data and information resources on the Internet. The development of artificial intelligence applications is so rapid that it is outpacing the adaptation of regulations. The new applications being developed do not always generate exclusively positive impacts; the potentially negative effects include the generation of disinformation on the Internet, i.e. information crafted using artificial intelligence that is not in line with the facts and is disseminated on social media sites. This raises a number of questions regarding the development of artificial intelligence and its new applications, the possibilities that will arise under the next generations of artificial intelligence, and the possibility of teaching artificial intelligence to think, i.e. to realise artificial thought processes in a manner analogous or similar to the thought processes of the human mind.
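A minimal sketch of the machine/deep learning idea described above, using scikit-learn's multi-layer perceptron on a standard handwritten-digits data set: the network's weights are set by learning from labelled data rather than by explicit programming. The layer sizes are arbitrary illustrative choices.
```python
# Minimal sketch of supervised learning with a multi-layer neural network:
# weights are learned from labelled examples, not hand-programmed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of artificial neurons; deeper networks follow the same idea.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))
```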
In view of the above, I address the following question to the esteemed community of scientists and researchers:
The fourth technological revolution currently taking place is characterised by rapidly advancing ICT and Industry 4.0 technologies, including but not limited to machine learning, deep learning and artificial intelligence... What's next? Intelligent, thinking, autonomous robots?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining quantum computing, Big Data Analytics of data and information extracted from, for example, large numbers of websites and social media sites, cloud computing, satellite analytics and artificial intelligence in joint applications for building integrated analytical platforms, is it possible to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thus significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
Ongoing technological progress is increasing the technical possibilities of conducting research, collecting and assembling large amounts of research data, and processing them against multiple criteria using ICT and Industry 4.0 technologies. Before the development of ICT, IT tools and personal computers in the second half of the 20th century as part of the third technological revolution, computerised, semi-automated processing of large data sets was very difficult or impossible. As a result, building multi-criteria models of complex macro-process structures on large sets of data and information, simulation models and forecasting models was limited or practically impossible. The technological advances of the current fourth technological revolution and the development of Industry 4.0 technologies have changed a great deal in this regard. The current fourth technological revolution is, among other things, a revolution in the improvement of multi-criteria, computerised analytical techniques based on large data sets. Industry 4.0 technologies, including Big Data Analytics, are used in the multi-criteria processing and analysis of large data sets, and artificial intelligence (AI) can be useful for scaling up the automation of research processes and the multi-faceted processing of big data obtained from research.
These technological advances are improving computerised analytical techniques applied to increasingly large data sets. Applying the technologies of the fourth technological revolution, including ICT and Industry 4.0, to multi-criteria analyses and to simulation and forecasting models built on large sets of information and data increases the efficiency of research and analytical processes. Increasingly, research conducted within different scientific disciplines and fields of knowledge uses computerised analytical tools, including Big Data Analytics in conjunction with other Industry 4.0 technologies.
When these analytical tools are augmented with Internet of Things technology, cloud computing and satellite-based sensing and monitoring techniques, opportunities arise for real-time, multi-criteria analytics of large areas, e.g. of nature and climate, conducted using satellite technology. When machine learning, deep learning, artificial intelligence, multi-criteria simulation models and digital twins are added to these techniques, it becomes possible to create predictive simulations of multi-factor, complex macro-processes in real time. The complex, multi-faceted macro-processes whose study is facilitated by these new technologies include, on the one hand, multi-factor natural, climatic and ecological processes: changes in the state of the environment, environmental pollution, changes in ecosystems and biodiversity, changes in the state of soils in agricultural fields and of moisture in forested areas, environmental monitoring, deforestation caused by civilisational factors, and so on. On the other hand, they include economic, social and financial processes in the context of the functioning of entire economies, economic regions, continents, or the global economy.
Year on year, thanks to technological advances in ICT, including new generations of microprocessors with ever-increasing computing power, the possibilities for efficient, multi-criteria processing of large collections of data and information are growing. Artificial intelligence can be particularly useful for the selective and precise retrieval of defined types of information and data from many selected types of websites, and for the real-time transfer and processing of these data in database systems organised in cloud computing on Big Data Analytics platforms, accessed by a system managing a continuously updated model of a specific macro-process built with digital twin technology. In addition, the use of supercomputers, including quantum computers with particularly large computational capacities for processing very large data sets, can significantly increase the scale of data and information processed within multi-criteria analyses of natural, climatic, geological, social, economic and other macro-processes, and in the creation of simulation models concerning them.
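A minimal sketch of the predictive idea, assuming a single hypothetical macro indicator: a model is fitted on lagged observations and used to forecast the next value. An integrated platform would feed many heterogeneous Big Data sources into far richer models.
```python
# Minimal sketch of predictive analytics on a macro time series:
# fit a regression on lagged values, then forecast one step ahead.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical quarterly values of some macro indicator.
series = np.array([2.1, 2.3, 2.8, 3.5, 4.1, 5.0, 6.2, 7.9, 9.8, 12.4])

lags = 3
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]  # each target is the value following its three lags

model = LinearRegression().fit(X, y)
next_value = model.predict(series[-lags:].reshape(1, -1))
print("forecast for next period:", next_value[0])
```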
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is it possible, by combining quantum computing, Big Data Analytics of data and information extracted from, inter alia, a large number of websites and social media portals, cloud computing, satellite analytics and artificial intelligence in joint applications for building integrated analytical platforms, to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thereby significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence, is it possible to improve the analysis of macroprocesses?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Dariusz Prokopowicz

In your opinion, will the addition of mandatory sustainability reporting according to the European Sustainability Reporting Standards (ESRS) to company and corporate reporting motivate business entities to scale up their sustainability goals?
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of sustainability goals and accelerate the processes of transforming the economy towards a sustainable, green circular economy?
Taking into account the negative aspects of the unsustainable development of the economy, including the over-consumption of natural resources, the increasing scale of environmental pollution, still-high greenhouse gas emissions, progressing global warming and the intensifying negative effects of climate change, it is necessary to accelerate the pro-environmental and pro-climate transformation of the classic, brown, linear economy of excess into a sustainable, green, zero-carbon circular economy. One of the key determinants of this green transformation is the implementation of the Sustainable Development Goals, i.e. the 17 UN Sustainable Development Goals. In recent years, many companies and enterprises, noticing the growing importance of this issue, including the increasing pro-environmental and pro-climate awareness of citizens, i.e. their customers, have added the implementation of the Sustainable Development Goals to their missions and development strategies, and present themselves and their product and service offerings in advertising campaigns and other marketing communication as green, environmentally and climate friendly, and pursuing specific sustainability goals. Unfortunately, this is not always consistent with the facts. Research shows that in the European Union the majority of companies and enterprises already carry out this type of marketing communication to a greater or lesser extent. However, a significant proportion of businesses that present themselves as green and present their product and service offerings as green, made exclusively from natural raw materials and produced fully in line with sustainability goals, do so unreliably and mislead potential customers. Many companies and businesses are greenwashing. It is therefore necessary to improve the systems for verifying, against the facts, what economic operators present about themselves and their offerings in their marketing communications. By significantly reducing the scale of greenwashing, it will be possible to increase the effectiveness of the green transformation of the economy and genuinely increase the scale of achievement of the Sustainable Development Goals. Significant instruments for motivating business operators to conduct marketing communications reliably include extending the scope of business reporting to cover sustainability issues. The addition of sustainability reporting obligations for companies and businesses, in line with the European Sustainability Reporting Standards (ESRS), should motivate economic actors to scale up their implementation of the Sustainable Development Goals. In November 2022, the Council of the European Union gave final approval to the Corporate Sustainability Reporting Directive (CSRD). The Directive requires companies to report on sustainability in accordance with the ESRS. This means that, under the Directive, more than 3,500 companies in Poland will have to disclose sustainability data.
The ESRS standards developed by EFRAG (European Financial Reporting Advisory Group) have been submitted to the European Commission, and we are currently awaiting their final form as delegated acts. This does not mean, however, that companies should not already be looking at the new obligations, especially if they have not reported on sustainability issues so far or have done so only to a limited extent. Companies will have to disclose sustainability issues in accordance with the ESRS, so it is essential to build systemic reporting standards for business entities enriched with sustainability issues. If the addition of sustainability reporting obligations in accordance with the ESRS to company and corporate reporting is carried out effectively, business entities should face an increased incentive to scale up their sustainability goals. In this regard, the introduction of enhanced disclosure of sustainability issues should help to increase the scale of implementation of the Sustainable Development Goals and accelerate the transformation of the economy towards a sustainable, green circular economy.
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of the Sustainable Development Goals and accelerate the processes of transformation of the economy towards a sustainable, green circular economy?
In your opinion, will the addition of mandatory sustainability reporting for companies and businesses, in line with the European Sustainability Reporting Standards (ESRS), motivate business entities to scale up their implementation of the Sustainable Development Goals?
Will the extension of sustainability reporting by business entities motivate companies to scale up their sustainability goals?
What challenges do companies and businesses face in relation to the obligation for expanded disclosure of sustainability issues?
What do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your personal opinions and an honest approach to discussing scientific issues, rather than ready-made answers generated by ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Best wishes,
Dariusz Prokopowicz

A fundamental question for artificial intelligence (AI) and informatics scientists: Are information and artificial and biological intelligence non-causal, i.e. not based on energy?
I am now finalizing a book on this theme. It is theoretically very fundamental to AI and biological intelligence (BI).
I create a system of thought that yields Universal Causality in all sciences and also in AI and BI.
I invite your ideas. I have already uploaded a short document on this to my RG page. Kindly read it and comment here.
The book is supposed to appear at some time after Dec 2023 in English and Italian, and then in Spanish. Will keep you informed.
What, in your opinion, are the negative effects of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
A recent survey shows that only 60 per cent of the public in Poland knows what inflation is, including the awareness that a drop in inflation from a high level means that prices are still rising, only more slowly. In Poland, in February 2023, the government-controlled Central Statistical Office reported consumer inflation of 18.4 per cent. Since March, disinflation has been under way: in April 2023 the Central Statistical Office reported consumer inflation of 14.7 per cent. The most optimistic forecasts of the central bank informally cooperating with the government, i.e. the National Bank of Poland, suggest that inflation in Poland may only fall to single-digit levels in December. After deducting international factors, i.e. the prices of energy raw materials, energy and foodstuffs, core inflation, i.e. that determined by internal factors in Poland, still stands at around 12 per cent. The drop in inflation since March has been largely determined by a reduction in the until recently excessive margins and prices of motor fuels by the government-controlled, monopolistically operating, state-owned gas and fuel concern, which holds over 90 per cent of domestic production and sales of motor fuels. These reductions are the result of criticism in the independent media that this government-controlled concern was acting anti-socially, making excessive profits by maintaining increased margins and not reducing the price of motor fuels until early 2023, despite the fact that the prices of energy raw materials, including oil and natural gas, had already fallen to levels from before the war in Ukraine. Citizens can only find out from the government-independent media what is really happening in the economy. Consequently, in the government-controlled mainstream media, including the government-controlled so-called public television, other media, including independent media, are constantly criticised and harassed informationally. But back to the issue of the public's economic knowledge. Among the media in Poland, it is the media independent of the PIS government that play an important role in increasing economic awareness and knowledge, including the objective presentation of events in the economy and explanations of economic processes consistent with the fundamentals of economics. The aforementioned research shows that as many as 40 per cent of citizens in Poland still do not know what inflation is and do not fully understand what a successive decrease in inflation consists of. Some of these 40 per cent assume that a fall in inflation, even from a high level, i.e. the disinflation currently taking place, means that the prices of purchased products and services are supposedly falling. The level of economic knowledge is therefore still low, and various dishonest economic actors and institutions take advantage of this. The low level of economic knowledge among the public has often been exploited by para-financial companies which, in their advertising campaigns and by presenting themselves as banks, created financial pyramids that took money from the public for unreliable deposits. Many citizens lost their life savings in this way. In Poland, this was the case when the authorities overseeing the financial system inadequately informed citizens about the high risk of losing money deposited with such para-banking and pseudo-investment companies as Kasa Grobelnego and AmberGold.
In addition, the low level of economic knowledge in society makes it easier for unreliable political options to find support among a significant proportion of citizens for populist pseudo-economic policy programmes and, on that basis, to win parliamentary elections and conduct economic policy in a way that leads to financial or economic crises within a few years. It is therefore necessary to develop a system of economic education from primary school onwards, but also in the so-called Universities of the Third Age, which are mainly attended by senior citizens. This is important because it is seniors who are most exposed to unreliable, misleading advertising campaigns run by fraudulent para-financial companies. Exploiting the low level of economic knowledge, the government in Poland, through the controlled mainstream media, persuades a significant part of the population to support what is in reality an anti-social, anti-environmental, anti-climate, financially unsustainable pseudo-economic policy, which leads to high indebtedness of the state financial system and to recurring financial and economic crises.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What, in your opinion, are the negative consequences of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
What are the negative consequences of the low level of economic knowledge of the public?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your personal opinions and an honest approach to discussing scientific issues, rather than ready-made answers generated by ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Warm regards,
Dariusz Prokopowicz

Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow: that we need something, that we should perhaps buy something we think we need?
Can an AI-equipped Internet robot, using the results of research carried out by advanced Big Data socio-economic analytics systems and employed in the call centre department of a company or institution, already forecast in real time the consumption and purchase needs of a specific Internet user on the basis of a conversation with that potential customer and, on this basis, offer the user products or services that they themselves would probably conclude, a moment later, that they need?
On the basis of analytics of a bank customer's purchases of products and services, of online payments and settlements, and of bank card payments, will banks refine their models of customers' preferences for specific banking products and financial services? For example, will the purchase of a certain type of product or service result in an offer of, say, specific insurance or a bank loan to a specific customer of the bank?
Will this be an important part of the automation of processes carried out within computerised customer relationship systems in the context of the development of banking in the years to come?
For years, in databases, data warehouses and Big Data platforms, Internet technology companies have been collecting information on citizens, Internet users, customers using their online information services.
Continuous technological progress increases the possibilities of obtaining, collecting and processing data on citizens in their role as potential customers: consumers of Internet offerings and other media, of Internet information services, and of various types of products and services, and targets of advertising campaigns that also influence the general social awareness of citizens and the choices people make in various aspects of their lives. The new Industry 4.0 technologies currently being developed, including Big Data Analytics, cloud computing, the Internet of Things, Blockchain, cybersecurity, digital twins, augmented reality, virtual reality, machine learning, deep learning, neural networks and artificial intelligence, will determine rapid technological progress and new applications in online marketing in the years to come. The robots being developed, which collect specific content from various websites and web pages, are able to pinpoint information written by Internet users on their social media profiles. In this way, a large amount of information describing a specific Internet user can be obtained, and on this basis a highly accurate characterisation of that user can be built, along with multi-faceted characteristics of customer segments for specific product and service offerings. In this way, digital avatars of individual Internet users are built in the Big Data databases of Internet technology companies, large e-commerce platforms and social media portals. The descriptive characteristics of such avatars are so detailed, and contain so much information about Internet users, that most of the people concerned do not even know how much information specific Internet technology companies, e-commerce platforms, social media portals, etc. hold about them.
Geolocation combined with 5G high-speed broadband and with ICT and Industry 4.0 technologies has made it possible to develop analytics that identify Internet users' shopping preferences, topics of interest, etc., depending on where, geographically, they are at any given time with the smartphone on which they use certain online information services. Combining these technologies in the applications installed on smartphones has, on the one hand, increased the scale of data collection on Internet users and, on the other, increased the efficiency of processing these data and using them in the marketing activities of companies and institutions, with these operations increasingly carried out in real time in cloud computing and their results presented on Internet of Things devices.
It is becoming increasingly common to experience situations in which, while walking with a smartphone past a physical shop, bank, company or institution offering certain services, we receive an SMS, banner or message on the Internet portal we have just been using, informing us of a new promotional offer of products or services of the particular shop, company or institution we have passed.
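A minimal sketch of such a geolocation-triggered offer, under simplified, hypothetical assumptions: the user's profiled interests and current coordinates are known, and each shop has a location and a category. Real systems would add consent management, richer propensity models and delivery channels.
```python
# Minimal sketch of a geolocation-triggered marketing offer:
# push an offer if a nearby shop matches the user's profiled interests.
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical profiled user and nearby shops.
user = {"interests": {"books", "coffee"}, "lat": 52.2297, "lon": 21.0122}
shops = [
    {"name": "BookShop", "category": "books", "lat": 52.2301, "lon": 21.0130},
    {"name": "GymPro", "category": "fitness", "lat": 52.2310, "lon": 21.0150},
]

for shop in shops:
    close = distance_km(user["lat"], user["lon"], shop["lat"], shop["lon"]) < 0.2
    if close and shop["category"] in user["interests"]:
        print(f"Push offer from {shop['name']} to the user's smartphone")
```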
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
Is analytics based on Big Data and artificial intelligence, conducted in the field of market research and analysis and the creation of characteristics of target customer segments, already able to forecast what we will think about tomorrow: that we need something, that we might need to buy something we consider necessary?
Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow?
The text above is my own, written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems such as ChatGPT.
Copyright by Dariusz Prokopowicz
What do you think about this topic?
What is your opinion on this subject?
Please answer,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

How to build a Big Data Analytics system based on artificial intelligence that is more advanced than ChatGPT and learns only from real information and data?
How to build a Big Data Analytics system that analyses information taken from the Internet, an analytics system based on artificial intelligence conducting real-time analytics and integrated with an Internet search engine, but an artificial intelligence system more advanced than ChatGPT, which, through discussion with Internet users, will improve data verification and learn only from real information and data?
ChatGPT is not perfect at self-learning new content and perfecting the answers it gives: it sometimes gives confirming answers when the question formulated by the Internet user contains information or data that is not factually correct. In this way, in the course of such 'discussions', ChatGPT can learn not only new but also false information and fictitious data. Currently, various technology companies are planning to create, develop and implement computerised analytical systems based on artificial intelligence technology similar to ChatGPT, which will find application in various fields of big data analytics, business and research work, and in various business entities and institutions operating in different sectors and industries of the economy. One direction of development of this kind of technology is the plan to build a system for analysing large data sets taken from the Internet: an analytical system based on artificial intelligence conducting analytics in real time, integrated with an Internet search engine, but more advanced than ChatGPT, which, through discussion with Internet users, will improve data verification and learn only from real information and data. Some technology companies are already working on creating such technological solutions and applications. Presumably, many technology start-ups that plan to create, develop and implement business-specific technological innovations based on a particular generation of artificial intelligence technology similar to ChatGPT are also considering research in this area, and perhaps developing a start-up around a business concept in which Industry 4.0 technological innovation, including the aforementioned artificial intelligence technologies, is the key determinant.
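A minimal sketch of the "learn only verified data" idea: candidate claims harvested from user discussions are admitted to the training store only if a trusted reference confirms them. The `TRUSTED_FACTS` lookup is a hypothetical placeholder for real fact-checking services.
```python
# Minimal sketch of gating what a self-learning system may learn:
# claims from user discussions enter the training store only if verified.
TRUSTED_FACTS = {  # hypothetical stand-in for external fact-checking
    "water boils at 100 c at sea level": True,
    "the earth is flat": False,
}

training_store = []

def admit(candidate: str) -> bool:
    """Add a claim to the training data only if a trusted source confirms it."""
    verdict = TRUSTED_FACTS.get(candidate.lower())
    if verdict:  # unknown or refuted claims are simply not learned
        training_store.append(candidate)
        return True
    return False

print(admit("Water boils at 100 C at sea level"))  # True  -> learned
print(admit("The earth is flat"))                  # False -> rejected
```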
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build a Big Data Analytics system that analyses information taken from the Internet, an analytical system based on artificial intelligence conducting real-time analytics and integrated with an Internet search engine, but an artificial intelligence system more advanced than ChatGPT, which, through discussion with Internet users, will improve data verification and learn only from real information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

How can the implementation of artificial intelligence help automate the process of analysing the sentiment of the content of posts, comments, banners, etc. published by Internet users on popular online social media, and of analysing changes in opinion on specific topics and changes in trends in general social awareness, conducted using computerised Big Data Analytics platforms?
How can the architecture of the computerised analytics systems of the Big Data Analytics platforms used to analyse the sentiment of Internet users' social media activity be improved using the new technologies of Industry 4.0, including artificial intelligence, deep learning and machine learning?
In recent years, analytics conducted on large data sets downloaded from multiple websites using Big Data Analytics platforms has been developing rapidly. This type of analysis includes sentiment analyses of changes in Internet users' opinions on specific topics and issues, on product and service offerings, company brands, public figures, political parties, etc., based on verification of thousands of posts, comments and answers given in discussions on social media sites. With the ever-increasing computing power of new generations of microprocessors and the speed of processing data stored on ever-larger digital storage media, increasing the scale of automation of these sentiment analyses is becoming more and more important. Certain new Industry 4.0 technologies, including machine learning, deep learning and artificial intelligence, are coming to the aid of this issue. I am conducting research on the process of sentiment analysis of the content of posts, comments, banners, etc. published by Internet users on popular online social media, and of changes in opinion on specific topics and trends in general social awareness, conducted using computerised Big Data Analytics platforms. I have included the results of these studies in my articles on this subject and have posted the published articles on my profile on this ResearchGate portal. I would like to invite you to join me in scientific cooperation on this issue.
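A minimal sketch of the automated sentiment-scoring step, using NLTK's VADER analyser on two hypothetical posts; a Big Data Analytics platform would run this step in parallel over millions of posts and aggregate the scores over time.
```python
# Minimal sketch of automated sentiment scoring of social media posts.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon used by VADER
analyser = SentimentIntensityAnalyzer()

posts = [  # hypothetical posts; real pipelines stream these from platforms
    "I love the new product line, great quality!",
    "Terrible customer service, never buying again.",
]
for post in posts:
    # compound score in [-1, 1]: negative -> negative sentiment
    print(analyser.polarity_scores(post)["compound"], post)
```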
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the implementation of artificial intelligence help automate the process of analysing the sentiment of the content of posts, comments, banners, etc. published by Internet users on popular online social media, and of analysing changes in opinion on specific topics and changes in trends in general social awareness, conducted using computerised Big Data Analytics platforms?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
Please answer with reasons,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

How can artificial intelligence such as ChatGPT, together with Big Data Analytics, be used to analyse the level of innovation of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, environmental innovations, energy innovations and other types of innovation?
The economic development of a country is determined by a number of factors, which include the level of innovativeness of economic processes, the creation of new technological solutions in research and development centres, research institutes and the laboratories of universities and business entities, and their implementation into the economic processes of companies and enterprises. In the modern economy, the level of innovativeness of the economy is also shaped by the effectiveness of innovation policy, which influences the formation of innovative startups and their effective development. The economic activity of innovative startups carries high investment risk, and for the institutions financing startup development it generates high credit risk; as a result, many banks do not finance business ventures led by innovative startups. Within systemic programmes financing startup development from national public funds or international innovation support funds, financial grants are organised which can be provided as non-refundable financial assistance if a startup successfully develops its business venture according to the original plan set out in its application for external funding. Non-refundable grant programmes can thus activate the development of innovative business ventures in specific areas, sectors and industries of the economy, including, for example, innovative green business ventures that pursue the Sustainable Development Goals and are part of the green transformation of the economy. Institutions distributing non-refundable financial grants should constantly improve their systems for analysing the level of innovativeness of the business ventures described as innovative in funding applications. To improve systems for verifying the level of innovativeness of business ventures and the fulfilment of specific goals, e.g. Sustainable Development Goals or green transformation goals, new Industry 4.0 technologies implemented in Business Intelligence analytical platforms can be used: machine learning, deep learning, artificial intelligence (including, for example, ChatGPT), Business Intelligence platforms with Big Data Analytics, cloud computing, multi-criteria simulation models, and so on. Given appropriate IT equipment, including computers with new-generation, high-performance processors, it is therefore possible to use artificial intelligence such as ChatGPT, Big Data Analytics and other Industry 4.0 technologies to analyse the level of innovativeness of the new economic projects that startups plan to develop, implementing innovative business, technological, ecological, energy and other types of innovations.
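A minimal sketch of a multi-criteria scoring step such a verification system could use: each grant application is scored against weighted innovation criteria. The criteria, weights and scores shown are hypothetical; in practice they would come from expert assessment and AI-assisted document analysis.
```python
# Minimal sketch of multi-criteria scoring of a startup grant application.
CRITERIA_WEIGHTS = {  # hypothetical criteria; weights sum to 1.0
    "technological_novelty": 0.35,
    "sustainability_impact": 0.25,
    "market_potential": 0.25,
    "team_track_record": 0.15,
}

def innovation_score(assessment: dict) -> float:
    """Weighted sum of 0-10 criterion scores for one application."""
    return sum(CRITERIA_WEIGHTS[k] * assessment[k] for k in CRITERIA_WEIGHTS)

application = {  # hypothetical 0-10 scores for one startup project
    "technological_novelty": 8,
    "sustainability_impact": 9,
    "market_potential": 6,
    "team_track_record": 5,
}
print(f"innovation score: {innovation_score(application):.2f} / 10")
```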
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence such as ChatGPT, together with Big Data Analytics, be used to analyse the level of innovation of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, ecological innovations, energy innovations and other types of innovation?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

Does analytics based on sentiment analysis of changes in Internet users' opinions using Big Data Analytics help detect fake news spread as part of the deliberate dissemination of disinformation on social media?
The spread of disinformation on social media, carried out by setting up fake profiles and spreading fake news through them, is becoming increasingly dangerous for the security not only of specific companies and institutions but also of the state. The various social media, including those dominating this segment of new online media, differ considerably in this respect. The problem is more acute on those social media which are among the most popular and whose users are mainly young people, whose world view can be more easily influenced by fake news and other disinformation techniques used on the Internet. Currently, the social media most popular among children and young people include TikTok, Instagram and YouTube. Consequently, in recent months the governments of some countries have begun restricting social media sites such as TikTok by banning the installation and use of the portal's application on smartphones, laptops and other devices used for official purposes by employees of public institutions. These governments argue that such actions maintain a certain level of cybersecurity and reduce the risk of surveillance and of theft of sensitive, strategic and security-critical data and information of individual institutions, companies and the state. In addition, there have already been more than a few cases of data leaks from other social media portals, telecoms, public institutions, local authorities and others, based on hacking into the databases of specific institutions and companies. In Poland, however, the opposite is true: not only does the ruling political group PIS not restrict the use of TikTok by employees of public institutions, it also encourages politicians of the ruling option to use the portal to publish videos as part of the ongoing electoral campaign, aiming to increase its chances of winning the parliamentary elections for the third time in the autumn of 2023. According to analysts researching the problem of growing disinformation on the Internet, in highly developed countries it is enough to create 100,000 avatars, i.e. fictitious persons who appear to exist and function on the Internet through fake profiles created on social media portals, to seriously influence the world view and general social awareness of Internet users, i.e. usually the majority of citizens in the country. In third-world countries and countries with undemocratic systems of power, about 1,000 such avatars suffice, with back-stories modelled, for example, on famous people, such as, in Poland, a well-known singer claiming that there is no pandemic and that vaccines are an instrument for increasing state control over citizens. The analysis of changes in Internet users' world views, of trends in social opinion on specific issues, of evaluations of specific product and service offerings, and of the brand recognition of companies and institutions can be conducted on the basis of sentiment analysis of changes in Internet users' opinions using Big Data Analytics. Consequently, this type of analytics can be applied, and be of great help, in detecting fake news disseminated as part of the deliberate spread of disinformation on social media.
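A minimal sketch of one signal that can complement such sentiment analytics in detecting coordinated disinformation: many ostensibly different profiles posting near-identical text. The profiles and posts below are hypothetical; real systems combine many such signals.
```python
# Minimal sketch of spotting coordinated posting: group posts by
# normalised text and flag messages repeated across many profiles.
from collections import defaultdict

posts = [  # hypothetical (profile, post text) pairs
    ("user_a", "there is no pandemic, vaccines are a control tool"),
    ("user_b", "There is no pandemic. Vaccines are a control tool!"),
    ("user_c", "Lovely weather in Warsaw today"),
    ("user_d", "there is no pandemic vaccines are a control tool"),
]

def normalise(text: str) -> str:
    """Lowercase and strip punctuation so near-duplicates collide."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

by_content = defaultdict(set)
for profile, text in posts:
    by_content[normalise(text)].add(profile)

for text, profiles in by_content.items():
    if len(profiles) >= 3:  # same message from many accounts -> suspicious
        print("possible coordinated campaign:", sorted(profiles), "->", text)
```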
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does analytics based on sentiment analysis of changes in Internet users' opinions, conducted using Big Data Analytics, help detect fake news spread as part of the deliberate dissemination of disinformation on social media?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

We know that knowledge management transcends, or goes beyond, information management; but what process do you think should be followed so that the two are not evaluated separately?
Do new information and communication technologies (ICT) facilitate the development of scientific collaboration and of science itself?
Do new ICT facilitate scientific research and the conduct of research activities?
Do new ICT, Internet technologies and/or Industry 4.0 technologies facilitate research?
If so, to what extent, in which areas of your research has this facilitation occurred?
What examples do you know of from your own research and scientific activity that support the claim that new ICT information technologies facilitate research?
What is your opinion on this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

Several leading technology companies are currently working on smart glasses that will be able to take over many of the functions currently performed by smartphones.
This will no longer be just Augmented Reality, Street View, interactive connection to Smart City systems, or the Virtual Reality used in online computer games, but many other remote communication and information services as well.
In view of the above, I address the following questions to the esteemed community of researchers and scientists:
Will smart glasses replace smartphones in the next few years?
Or will thin, flexible interactive panels stuck on the hand prove more convenient to use?
What new technological gadget could replace smartphones in the future?
What do you think about this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Greetings,
Dariusz Prokopowicz

Hi everyone,
We'd like to open up a huge topic with a systematic literature review. However, the topic is so broad that the initial search on Web of Science returned over 25,000 papers meeting our search criteria. (This can surely be reduced, but only slightly.)
I'd like to explore the possibilities of computer-assisted review; there must be software capable of performing an analysis of some sort. Does anyone have experience in this field?
Thank you for your thoughts.
Best regards,
Martin
Do you have any experience with, or an opinion about, the accuracy of scientific information? The paper describes the accuracy of Wikipedia. I am experiencing resistance from a wiki-bot/automatic response that prevents me from correcting the incorrect content. Thank you.
What strategies do you personally follow to manage information overload?
Hi, does anyone know of theories related to (the improvement of) product information and/or product (detail) pages for online retailing? Any pointers will be appreciated a lot - thanks!
How can one judge the correctness of information related to COVID-19, and how reliable are the various online sources of this information?
What should and should not we trust? Where should we get our information?
This is my only question on logic on RG; there are other questions on applications of logic, which I recommend.
Truth values can come in any type and number, not just binary, two or three; it depends on the finesse desired. Information processing and communication seem to be described by a tri-state (or higher) system in classical systems such as FPGAs, ICs, CPUs and others, in multiple applications programmed in SystemVerilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is that it deals more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be even more evident in quantum computing, offering new vistas, as explained in the preprint.
As new evidence accumulates, including in modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only a mathematical nature is evident as a description of reality, while a physical description is denied. Ternary logic would then replace the physical description of choices, with a possible third truth value, which one already faces in physics, biology, psychology and life, where choices amount to more than a coin toss.
The physical description of "heads or tails" is denied in favor of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
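As a toy illustration of the third truth value discussed above, here is a sketch of Kleene's strong three-valued logic, one standard choice (SystemVerilog's four-valued 0/1/X/Z logic is a richer industrial cousin):

```python
# Kleene's strong three-valued logic: F < U ("unknown") < T, encoded as
# 0.0, 0.5, 1.0. AND is min, OR is max, NOT is 1 - x.
F, U, T = 0.0, 0.5, 1.0

def t_and(a, b): return min(a, b)
def t_or(a, b):  return max(a, b)
def t_not(a):    return 1.0 - a

# "Heads or tails" with a genuinely undecided third case:
print(t_or(T, U))   # 1.0 -> true OR unknown is true
print(t_and(T, U))  # 0.5 -> true AND unknown stays unknown
print(t_not(U))     # 0.5 -> NOT unknown is still unknown
```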
How does one reconcile the principle of quantum physics that information cannot be destroyed with the statement of General Relativity that black holes destroy information?
Eric Kandel (2006) has revealed that the consolidation of memory at the level of the nucleus is a bipolar process: chemical agents exist in our cells that can either potentiate or suppress memory. Having a double-ended system prevents extremes: remembering everything or remembering nothing. As children we are rewarded for remembering everything we are taught in school, under the assumption that all knowledge is good. But what happens if the knowledge is tainted, such as the claims that the black slaves on plantations enjoyed being supported by the white slave owners, that the Holocaust was a fabrication, that the recent election in the United States was rigged, that vaccines produce massive side-effects, that drinking Clorox is an effective way to kill Covid-19, and so on? It is instructive that Albert Einstein was not a great student (i.e., he did not like to memorize things, and he had difficulty with his second language, French, which he needed to complete his university entrance exams; Strauss 2016), yet his ability to zero in on the important data while excluding nonsense is what made him an extraordinary scientist. Ergo, the management of one's memory may be as important as having a good memory.
References
Kandel ER (2006) In Search of Memory. The Emergence of a New Science of Mind. W.W. Norton & Company Inc., New York.
Strauss V (2016) Was Albert Einstein really a bad student who failed math? The Washington Post, Feb.
Are there any studies that show a positive relationship between the length of a text (word count or number of characters) and its information content?
Could anyone provide comments and/or references on the measurement of "information depth"?
By "information depth" I mean more than just the minimum number of bits needed to reproduce a given piece of information. It would also have to involve aspects related to the content, and perhaps corollary aspects of the information that are a full part of it (how it is collected as a function of the environment? its added value in a given context? its "strength" for further progress? ...).
For instance, take these two verses (in French):
Gal, amant de la reine, alla, tour magnanime
Galamment de l'arène à la Tour Magne à Nîmes.
(Roughly: "Gal, the queen's lover, went, a magnanimous feat, gallantly from the arena to the Magne Tower at Nîmes.")
Both verses mean something coherent. But there is also additional information in them:
- they are alexandrines;
- they are pronounced exactly the same way (the second verse is a full rhyme of the first);
- there is geographical information: there are an (ancient) arena and a tower called "Magne" in the French city of Nîmes, in the south of France;
- ...
Similarly, how could one measure the exchange of information by which a transcription factor (in molecular biology) recognizes a given DNA sequence to be transcribed, for instance A-C-A-G-G-T-A-G-T-C ...? (And, by the way, how can it "instantaneously" recognize that sequence, and only that one sequence, which yields the relevant needed protein?) And how could the process of information exchange be described for, e.g., the methylation or demethylation of the right DNA base (at the right moment), and likewise for cellular division (chromatin-histone compacting/decompacting, ...), for reprogramming in meiosis, etc. (epigenetics)?
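One crude, computable proxy for the "minimum bits" baseline that this question wants to go beyond: the compressed size of a string upper-bounds (up to a constant) its algorithmic information content. It is blind to the layered content of the verses above, metre, homophony, geography, which is precisely the gap a measure of "information depth" would have to fill:

```python
import zlib

v1 = "Gal, amant de la reine, alla, tour magnanime"
v2 = "Galamment de l'arène à la Tour Magne à Nîmes."

# For strings this short the zlib header dominates, so the absolute
# numbers are rough; the principle, not the values, is the point.
for label, verse in (("v1", v1), ("v2", v2), ("both", v1 + v2)):
    raw = verse.encode("utf-8")
    print(label, len(raw), "raw bytes ->", len(zlib.compress(raw, 9)), "compressed")
```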
Great attention should be paid to methods of searching for and selecting sources, in order to establish their credibility and the value of the information they provide.
I want to discover possible association rules among different events in the environment, for example an association between elevation and the incidence of a disease. I'm working with polygons, but it is possible to convert them to point features. Could you suggest methods and software that support this, for instance along the lines of the sketch below?
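For instance, a minimal sketch with mlxtend's Apriori implementation: each polygon (or point) becomes a row whose attributes have been binned into boolean items, and rules such as "high elevation -> disease present" are then mined from the one-hot table. The column names and thresholds here are hypothetical:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per spatial feature; attributes already discretized to booleans
# (a GIS package can export such an attribute table). Values are made up.
df = pd.DataFrame({
    "high_elevation":  [True, True, False, True, False, True],
    "high_humidity":   [False, True, True, True, False, True],
    "disease_present": [False, True, False, True, False, True],
})

itemsets = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```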
I know how the Random Forest works when we have two choices: if the apple is red go left, if the apple is green go right, and so on.
But in my case the data are texts (features). I trained the classifier on training data, and I would like to understand in depth how the algorithm splits a node: based on what? The tf-idf weight, or the word itself? In addition, how does it predict the class for each example?
I would really appreciate a detailed explanation with a text example.
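Not anyone's production code, but a minimal scikit-learn sketch of this setup may make the mechanics concrete: the text is vectorized into tf-idf features, and each tree node then splits on a numeric threshold over one tf-idf value (e.g. "tfidf('refund') <= 0.12"), chosen to maximize the impurity decrease, so it is the weight that is tested, not the word itself; prediction is a majority vote over the trees:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["great phone, love it",
        "terrible battery, want a refund",
        "love the battery life",
        "refund please, terrible product"]
labels = ["pos", "neg", "pos", "neg"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)        # sparse (n_docs, n_terms) tf-idf matrix
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Each tree votes a class; predict() returns the majority vote, while
# predict_proba() averages the per-tree class probabilities.
print(clf.predict(vec.transform(["battery is terrible"])))
```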
I have 600 examples in my dataset for a classification task. The number of examples in each class differs: ClassA has 300 examples, ClassB has 150, and ClassC has 150.
I have read many papers and resources about splitting data into two or three parts: train, validation and test. Some say that if you have limited data there is no need to waste examples on three parts; two parts (train-test) are enough, giving 70% for training and 30% for testing, and 5-fold cross-validation is also ideal for limited data.
Others say to use 70% for training (with the validation data taken as 30% of the training data itself) and to keep the remaining 30% of the original data for testing.
From your experience, could you share your thoughts and suggestions on this puzzle? (One common recipe is sketched below.)
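As a hedged illustration rather than a definitive answer: a stratified hold-out test set plus stratified k-fold cross-validation on the remainder preserves the 300/150/150 class ratio in every split. Synthetic data stands in for the real dataset here:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)

# Synthetic stand-in for the 600 examples with a 300/150/150 class split.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=5,
                           weights=[0.5, 0.25, 0.25], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X_train, y_train, cv=cv).mean())  # validation
print(clf.fit(X_train, y_train).score(X_test, y_test))       # final test
```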
Thank you
Does anybody have an example of research conducted in one area of science that, when cross-pollinated with research outputs from a different field, led to a breakthrough in a completely new area?
I am specifically looking for examples of research conducted in two completely different areas, with no obvious connection, that when brought together, by whatever means, led to a new discovery, process, or solution to an outstanding problem.
I have read your paper, "Finding Opinion Strength Using Fuzzy Logic on Web Reviews". It is a really good paper.
I would like to ask about page 42 (page 6), where you mention seven products, namely the Nikon D3SLR, Olympus FE-210, Canon 300, Canon EOS 40D, Fujifilm S9000, Sony Cyber-shot DSC-H10 and Kodak M1033.
How did you extract these data from the corresponding website? Do you have any public package for these data? I am very interested and would like to test them. Thanks very much.
You also mention that "our system has a good accuracy in predicting product ranking". I am sorry, but I have not seen the comparative experiments in the paper. How did you reach this conclusion? Thanks for any advice.
In our Age of Information, Shannon's physical theory of information is too narrow: he explicitly excluded "psychological considerations". But today the importance of this term is too great; we need a unified definition for all sciences!
My results can be seen at http://www.plbg.at, but they are only my ideas; they attempt to find a valid and acceptable abstraction across all sciences.
Thanks to number theory, we have been studying numbers and their properties for a long time now. Dealing with numbers usually involves trying to find out whether they possess certain special, almost magical, powers. My question revolves around some immediate practical aspects:
Is there a general, generic, genetic manner in which numbers can be used as a memory storage unit? Is there a measure of how much information can be stored in numbers and in representations of them? Is it possible to find out how many such numbers there are?
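On the storage question, one small, uncontroversial illustration: any byte string maps bijectively to an integer, and an integer N can hold floor(log2 N) + 1 bits, about 3.32 bits per decimal digit:

```python
import math

payload = b"store me"
n = int.from_bytes(payload, "big")                        # data -> one integer
restored = n.to_bytes((n.bit_length() + 7) // 8, "big")   # integer -> data

print(n)              # the number "storing" the data
print(restored)       # b'store me'
print(math.log2(10))  # ~3.32 bits of capacity per decimal digit
```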
Let's use an example. We have a function y = f(x), in which x is the input (the probability) and y is the output (the entropy). If we change y into y', can we find an x' such that f(x') = y'?
In other words, I know that when p changes, H changes; is the opposite possible, so that if H changes, we can recover how p changed?
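For the binary (Shannon) entropy the answer is yes, up to a two-way ambiguity: H(p) = -p log2 p - (1-p) log2(1-p) is strictly increasing on [0, 1/2], so it can be inverted numerically on that branch, and 1 - p is the only other preimage. A minimal sketch by bisection:

```python
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H_inv(h, tol=1e-12):
    """Invert H on [0, 1/2] by bisection; requires 0 <= h <= 1."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(mid) < h else (lo, mid)
    return (lo + hi) / 2

p = H_inv(0.9)          # the p in [0, 1/2] with H(p) = 0.9
print(p, 1 - p, H(p))   # both preimages, and a consistency check
```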
Is measurement the ultimate common denominator between these three movements?
Information analysis as a discipline belongs to information science. I am specifically interested in the behavior of information published electronically by governments on social networks.
Thinking, insight, equations and gedanken experiments are all other words for information in a general sense. If this is true, then it would seem that information has to be exchanged in the universe before cause and effect can be observed or constructed. Information, therefore, applies to every discipline, although it may be called something else in each one.
What are Semantics of Business Vocabulary and Business Rules (SBVR) models, and what are information system models? Does UML model the business or the information system?
I have made the effort to produce such software and am seeking people who have suitable applications for such sequences. Currently, the software will compute shustrings over the integers and the nucleotides, but will generate maximally (or uniformly) disordered sequences only over the integers; support for nucleotides is coming, as are binary and a user-selectable symbol set. For now, I am soliciting an understanding of the application areas for such sequences, and offering them to interested parties in the hope that users will provide feedback.
This software generates sequences in integral powers of ten, up to a length of one hundred million digits; a sequence of one hundred million digits takes just 30 seconds to produce.
Hi gurus, I have a set of documents and I want to know the topic these documents are about. Is this an issue of topic modeling? Is there software or a technique where I give this set of documents as input and it gives me the topic, perhaps using some kind of taxonomy? Can anybody explain both the theory and the practice? Thanks a lot.
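For the practical side, a minimal sketch of topic modelling with scikit-learn's LDA: documents go in, top words per topic come out. Real corpora need more preprocessing and a sensible choice of the number of topics, and mapping topics onto a taxonomy label is a further step:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["stock markets fell on rate fears",
        "central bank raises interest rate",
        "team wins the league final",
        "striker scores twice in the final"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                    # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the five highest-weight words in each learned topic.
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```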
Products in final assembly are becoming more and more complex. I am investigating how companies develop their information support for the operator, and I would like to know if anyone else has done research in this area.
Examples of RQs:
Which media (text, pictures, movies) are best for presenting information, in terms of:
quality?
personalised instructions?
time saving?
cost saving?
social sustainability?
available ICT?
I want to study the process of appropriation of information through Facebook virtual communities. Could you help me with this?
I would like to know if you could advise me on a review of the literature on these concepts:
-a / the concept of information (not the information system)
-b / the adoption of information (not the adoption of information systems)
-c / the acquisition of information (and not the acquisition of information systems)
-d / the ownership of the information (not the ownership of the information system)
Can we classify them in this order:
-1 / Adoption of information;
-2 / Acquisition of information;
-3 / Ownership of information.
Thanks a lot
Hello, I want to compute the so-called IRT Item Information Function for individual items, as well as for the latent variable, using Stata. Several methods are available for IRT analysis, such as clogit, gllamm or raschtest, but so far I could not find any syntax to draw the information function using any of these methods. So, any help or example is much appreciated.
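If your Stata version ships the built-in irt suite, its irtgraph iif command may draw these curves directly; failing that, the function is simple enough to compute by hand. For a 2PL model, I(theta) = a^2 * P(theta) * (1 - P(theta)) with P(theta) = 1 / (1 + exp(-a * (theta - b))). A sketch in Python with illustrative parameters (in practice a and b would come from the fitted model):

```python
import math

def iif_2pl(theta, a, b):
    """Item information function for a 2PL item with discrimination a,
    difficulty b, evaluated at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Information peaks at theta = b and grows with the square of a.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(iif_2pl(theta, a=1.4, b=0.3), 4))
```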
Non-market production of information has been gaining ground for the last 15 years or so. New social forms of information production, facilitated by networks, are becoming counterintuitive to people living in market-based economies. Individuals can reach and inform millions of others around the world. This fact has led to the emergence of coordinated effects, where the aggregate effect of individual action, even if it is not self-consciously cooperative, produces the coordinated effect of a rich information environment. Given this empirical state of affairs, do you think that information networks present an alternative to traditional market production of information?
Brain-to-brain transfer of information has been illustrated between a pair of rats (Pais-Vieira et al. 2013). We evaluate the scientific validity of this study. First, the rats receiving the electrical stimulation were performing at 62 to 64% correctness, when chance was 50% correctness, using one of two discrimination paradigms, tactile or visual. This level of performance is not sustainable without being embedded within a behavioural paradigm that delivers reward periodically. Second, we estimated that the amount of information transferred between the rats was 0.004 bits per second with the visual discrimination paradigm and 0.015 bits per second with the tactile discrimination paradigm. The reason for these low transfer scores (i.e. rates that are 1 to 2 orders of magnitude lower than those achieved by brain-machine interfaces) is that overall the rats were performing close to chance. Nevertheless, based on these results, Pais-Vieira et al. have suggested that the next step is to extend their studies to multiple-brain communication. We would suggest that the information transfer rate for brain-to-brain communication be enhanced before performing such an experiment. Note that the information transfer rate for human language can be as high as 40 bits per second (Reed and Durlach 1998).
For more information see: Tehovnik EJ & Teixeira e Silva Z (2014) Brain-to-brain interface for real-time sharing of sensorimotor information: a commentary. OA Neurosciences, Jan 01;2(1):2.
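A back-of-envelope check of those rates, assuming a binary task with uniform priors: the mutual information per trial is at most 1 - H(p) bits, where p is the proportion correct and H is the binary entropy:

```python
import math

def bits_per_trial(p):
    """Capacity-style estimate for a binary task: 1 - H(p) bits per trial."""
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

for p in (0.62, 0.64, 0.99):
    print(p, round(bits_per_trial(p), 4))
# At 62% correct this gives roughly 0.042 bits per trial; spread over
# trials lasting several seconds, it lands in the 0.004-0.015 bits/s
# range reported above.
```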
From an information management point of view.
How can I calculate information entropy between wavelet coefficients and signals?
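One common recipe, sometimes called wavelet entropy: decompose the signal, treat the relative energy of each subband as a probability, and take the Shannon entropy of that distribution. A sketch with PyWavelets, where the wavelet and decomposition level are illustrative choices; note that this measures how the signal's energy spreads across scales, not a mutual information between coefficients and signal (which would require joint histograms):

```python
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy (bits) of the relative subband energies."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()            # relative wavelet energy
    return -np.sum(p * np.log2(p + 1e-12))   # entropy of the energy spread

t = np.linspace(0, 1, 1024)
print(wavelet_entropy(np.sin(2 * np.pi * 8 * t)))                    # concentrated
print(wavelet_entropy(np.random.default_rng(0).normal(size=1024)))   # spread out
```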