Information Analysis - Science topic
Questions related to Information Analysis
Single-cell organisms can be conditioned (Saigusa et al. 2008); therefore, it should be expected that single cells in the neocortex can also be conditioned (Prsa et al. 2017). In the study of Prsa et al. (2017), a cell was conditioned in the motor cortex of a mouse (as evidenced by two-photon imaging), and feedback of successful conditioning was achieved by optogenetic activation of cells in the somatosensory cortex. A head-fixed mouse was rewarded with a drop of water following volitional discharge of a motor cell, using the method of Fetz (1969). The conditioning was achieved after 5 minutes of practice. Furthermore, a group of three cells was conditioned such that two cells were made to fire at a high rate and one cell at a low rate, which indicates the inherent plasticity of the nervous system.
This is the first demonstration that the brain has single-cell resolution for transferring information. It is thus not surprising that the brain of a human being (which contains ~100 billion neurons) can transfer 40 bits per second (over a trillion possibilities per second, i.e. 2^40 per second) when engaged in language execution (Reed and Durlach 1998), but only after many years of training. If we assume that each cell in the human brain has (on average) at least 10 levels of firing frequency, then 100 billion neurons should be able to transfer 1 trillion output possibilities (i.e. 10 x 100 billion), or about 40 bits of information, all within a second.
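A quick numerical check of these figures (a sketch only; the 100-billion neuron count and the 10-level assumption are taken from the text itself, and the product follows the text's own arithmetic):

```python
import math

# Back-of-the-envelope check of the numbers quoted above (assumptions from the text).
bits_per_second = 40
print(f"2^{bits_per_second} = {2**bits_per_second:.2e} possibilities per second")  # ~1.1 trillion

neurons = 100e9            # ~100 billion neurons, as assumed in the text
levels = 10                # assumed distinguishable firing-frequency levels per cell
states = levels * neurons  # 1 trillion output possibilities, following the text's arithmetic
print(f"{states:.0e} output possibilities ~ {math.log2(states):.1f} bits")  # ~39.9, i.e. about 40 bits
```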
And to free up memory space for running the heart and lungs we have information chunking (Miller 1956), so that a concept like ‘E = mc^2’ (as developed by Einstein) can be memorized and used to extract pertinent information stored in any physics library (Clark 1998; Varela et al. 1991). The availability of books following the development of the printing press in 1436 (by Johannes Gutenberg) has contributed to world literacy by amplifying the information available to the human brain. Artificial intelligence will, no doubt, further enhance this amplification. In fact, much of what I have written over the years has been supported by Google/ResearchGate/AI—and this is without ever using ChatGPT to compose a text.
Why is it that online social media, popular among children and adolescents, continue to use algorithms that promote not only positive but also negative content? Is it simply a matter of increasing the reach of certain content, entries, posts, comments, tags, promotional banners, advertising videos, etc., in order to generate higher viewership and more views of certain posts, thereby raising the fees paid for advertisements placed on social media sites and increasing the already huge profits of the owners of these sites? Should social media sites pay for the treatment of mental disorders caused by the hate directed at children and adolescents on online social media, and for the upkeep of rehab clinics treating children and adolescents for social media addiction?
Algorithms that promote both positive and negative content on social media, especially popular among children and adolescents, are the result of complex mechanisms of the platforms and their business model. These algorithms are designed to maximize user engagement, which translates into increased time spent on the platform, more ad impressions and higher profits for portal owners. Unfortunately, one of the side effects of these strategies is the promotion of controversial or negative content, which often generates stronger emotions such as anger, fear or outrage. This type of content tends to spread quickly, as users are eager to comment, share and engage with it, which increases its reach.
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
I would like to invite you to join me in scientific cooperation,
Dariusz Prokopowicz
Sherrington (1906) predicted that the neocortex mediates all program changes to movement (which is regulated by consciousness/learning, Hebb 1949, 1968), whereas the cerebellum maintains a steady flow of movement once the changes are put in place (that is, once the efference-copy code is reset in the cerebellum via neocortical intervention, Chen 2019; Cullen 2015; De Zeeuw 2021; Fukutomi and Carlson 2020; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik et al. 2021; Wang et al. 2023). It is well accepted that the human brain is composed of 86 billion neurons, with the neocortex accounting for 16 billion and the cerebellum for 69 billion, leaving some one billion for the remaining structures (Herculano-Houzel 2009). These remaining one billion neurons (about 1% of the total) are left for sorting out functional details: the olfactory bulb for the sense of smell, and the thalamus for relaying gustatory, somatosensory, vestibular, auditory, and visual information. Moreover, the superior colliculus mediates orienting toward and away from external stimuli, the hypothalamus is connected to the hormonal and vascular systems, and various brain stem nuclei play their parts: the locus coeruleus mediates transitions between wakefulness and sleep, and the substantia nigra mediates behavioral drives, i.e., the speed of emotive responses. Lastly, we cannot forget the autonomic, ocular, and skeletal motor nuclei situated in the brain stem and spinal cord, which finalize glandular secretions and muscle contractions. This 1% of neurons in the human brain—without which death would ensue—is present in all vertebrates. Thus, what distinguishes humans from other vertebrates is the ratio of neurons utilized for information storage in the telencephalon (i.e., the neocortex of mammals) and cerebellum versus the neurons in the brain stem and spinal cord, which are required to maintain the organism (see Figure 28). It is obvious that consciousness must scale with this ratio in vertebrates (Hebb 1968; Koch 2013; Morgan 1900). As for invertebrates, a similar segregation between the capacity to store information and the capacity to maintain the organism must exist. Just how segregated these two properties are amongst the ganglia of invertebrates needs clarification. Indeed, ants have a communication system based on pheromones with a throughput of 1.4 bits per second (Tehovnik et al. 2021), which is based on a 20-item pheromone alphabet (Hölldobler and Wilson 1990; McIndoo 1914).[1] It is unclear where this alphabet is stored, but some have suggested that information related to pheromone communication is housed separately from the general olfactory sense (Nishikawa et al. 2012).
Footnote:
[1]The bit-rate is low because the olfactory system is slow acting, taking over a second to be engaged (McIndoo 1914).
Figure 28. The vertebrate brain is made up of the telencephalon (cerebrum that includes the hippocampus), the cerebellum, the optic tectum, and the olfactory bulb. Not labelled is the brain stem, and not shown is the spinal cord. The cerebellum in the lamprey and amphibian is small and therefore not marked; it sits on top of the brain stem. The telencephalon co-evolved with the cerebellum, since the two structures work in tandem for regulating sensation and movement and they are connected anatomically in all vertebrates (Cheng et al. 2014; Murakami 2017; Murray et al. 2017; Nieuwenhuys 1977; Xu et al. 2016). The sizes of the brains are not to scale.
Can the use of ChatGPT and other AI technologies pose a threat to human jobs, or can this technology be seen as a complement to enhance productivity and creativity in the workforce?
Aficionados of brain machine interfaces (BMI) have the goal of hooking up the neocortical neurons of an individual with spinal cord or brain stem damage to have him or her control a device with the neurons to restore walking or communication (Birbaumer et al. 1999; Hochberg et al. 2006; Shenoy et al. 2003; Taylor et al. 2002; Wessberg et al. 2000). Since single-cell organisms can be conditioned (Saigusa et al. 2008), it should not be surprising that a single cell of the neocortex can also be conditioned for BMI development. In the study of Prsa et al. (2017), a single neuron of a head-fixed mouse was conditioned in the motor cortex (as measured with two-photon imaging), and feedback of successful conditioning was achieved by optogenetic activation of cells in the somatosensory cortex. The mouse was rewarded with a drop of water following the volitional discharge of a motor cell using the method of Fetz (1969). The conditioning was achieved after 5 minutes of practice, which highlights that the neocortex has a tremendous capacity for making associations (as already discussed), and this is why the neocortex has been the focus of BMI development (Tehovnik et al. 2013).
For an amoeba to learn it must be able to transmit information through its cell membrane so that the internal state of the cell can be modified and the information stored for long-term use (Nakagaki et al. 2000; Saigusa et al. 2008). As mentioned, like single-cell organisms, multicellular organisms must also internalize and store changes to the environment for learning. The success of a BMI, therefore, depends on the extent of feedback during learning (Birbaumer 2006). In the study of Prsa et al. (2017) the feedback came from two sources: from the activation of a population of neurons in the somatosensory cortex and from the delivery of a reward, which would have engaged the reward circuits of the brain (Olds and Milner 1954; Olds 1958; Pallikaras and Shizgal 2022; Yeomans et al. 1988). In the study of Fetz (1969), monkeys were conditioned by associating neural responses in the motor cortex with the delivery of a reward. It was found that this association could be abolished by cutting the proprioceptive input (Wyler and Burchiel 1978; Wyler et al. 1979), given that, when learning the association, monkeys often moved their limbs to drive the neurons in the motor cortex and thereby facilitate reward delivery (Fetz 1969; Fetz and Baker 1973; Fetz and Finocchio 1971, 1972). It is noteworthy that following transection of the spinal cord to abolish proprioceptive input, a monkey was observed moving its flaccid arm with its functional arm to try to drive the cells in the motor cortex and obtain a reward (Wyler et al. 1979).
As the sensory feedback available to a monkey on a BMI task is reduced, the extent of modulation of the neocortical neurons during task performance is also reduced (Tehovnik et al. 2013). A monkey was trained to use a manipulandum to move a cursor from the center of a computer monitor to acquire a peripherally located visual target in exchange for a reward (O’Doherty et al. 2011). Three conditions were considered as neurons in the neocortex were activated to move the cursor: (1) moving the hand-held manipulandum to acquire the target, (2) having the hand-held manipulandum fixed in place as the target was being acquired, and (3) having no manipulandum and allowing the animal to free-view the monitor to acquire the target. Going from condition 1 to condition 3, the modulation of the neocortical neurons dropped by 80%. Thus, as the number of feedback channels for a BMI is reduced, the firing of the neocortical neurons can be expected to decline. This has direct implications for patients who are paralyzed and must therefore rely on the non-tactile and non-proprioceptive senses to engage the neurons of the neocortex.
Another factor affecting the BMI signal is that once the number of neurons recorded with an electrode array in the neocortex surpasses 40, the information transmitted to drive an external device begins to saturate (Figure 32, Tehovnik and Chen 2015). The best area of the neocortex for obtaining an optimal BMI signal is the motor cortex, for example when an external device is driven by forelimb movements that engage the visual system (Tehovnik et al. 2013). Also, primary cortical areas are superior to association areas for electrode implantation, since the best signals for BMI are found in areas M1, S1, and A1 (Lorach et al. 2023; Martin et al. 2014; Metzger et al. 2023; Tehovnik and Chen 2015; Tehovnik et al. 2013; Willett et al. 2021, 2023).
Furthermore, it has been known for some time that for a human subject to operate a BMI device using the neocortex, tremendous concentration is necessary, and it seems that even with practice the amount of concentration required is never reduced (Bublitz et al. 2018). As discussed, a central feature of learning via the neocortex is that with the learning of a task the behavior becomes automated, thereby reducing the number of CNS (central nervous system) neurons needed to perform the task. There is no evidence that neocortically-based BMIs can be automated, since devices need to be recalibrated daily (Ganguly and Carmena 2009). Finally, it is known that the neurons in the neocortex, e.g., in area M1, do not follow every behavior faithfully, given that the signals are highly variable and prone to wandering when examined across days and months (Gallego et al. 2020; Rokni et al. 2007; Schaeffer and Aksenova 2018). If the neocortex is indeed the center of consciousness (as is presumed here), then one should expect that the neurons in this part of the brain do not discharge lawfully with every motor response, as do the motor neurons in the brain stem and spinal cord (Schiller and Tehovnik 2015; Sherrington 1906; Vanderwolf 2007). Whether implanting electrodes in the cerebellum might overcome some of the shortcomings found for the neocortex should be considered.
So, how much information is transmitted by a BMI device in bits per second when recording from the neocortex? In 2013, the amount of information transmitted averaged 0.2 bits per second, which was based on work done on behaving primates as well as human subjects (Tehovnik et al. 2013). This value is comparable to the amount of information transmitted by Stephen Hawking (who suffered from amyotrophic lateral sclerosis, ALS) using his cheek muscle at 0.1 bits per second (corrected for information redundancy and based on data from De Lange 2011).[1] This means that at this time there would not have been any advantage for Hawking to use a BMI.
Several studies in recent years have increased the information transfer rate of BMIs above 1 bit per second. Metzger et al. (2023) developed a BMI to recover language in a patient who had experienced a brain stem stroke that abolished speaking and eliminated the ability to type. A 253-channel electrocortical array was placed on the surface of the sensorimotor cortex over areas that mediate facial movements. It was found that as the subject engaged in the silent reading of sentences, signals could be extracted from the neocortex (with the assistance of artificial intelligence) to generate text at a rate of 78 words per minute with 75% accuracy. This translates into 2.5 bits of information per second, or 5.7 possibilities per second [to derive the bit-rate, corrections were made for information redundancy, Reed and Durlach (1998); see Tehovnik et al. (2013) for other details]. This value is consistent with what has been reported by others using depth electrodes implanted in the motor cortex of the face and hand areas for silent reading and imagined writing (i.e., ranging from 1.2 to 2.1 bits per second with the assistance of artificial intelligence, Willett et al. 2021, 2023; also see Metzger et al. 2022).[2] Overall, 1.2 to 2.5 bits per second corresponds to predicting 2 to 6 possibilities per second, which falls far short of the performance of a cochlear implant (which can predict over 1,000 possibilities per second, Baranauskas 2014) and even further short of normal language (which can predict over a trillion possibilities per second, Reed and Durlach 1998).
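The bits-to-possibilities conversion used above is simply possibilities = 2^bits. A quick check of the quoted figures (a sketch; nothing beyond the numbers already in the text):

```python
import math

# possibilities per second = 2 ** (bits per second)
for bits in (1.2, 2.5, 40):
    print(f"{bits:>4} bits/s -> {2 ** bits:,.1f} possibilities/s")  # ~2.3, ~5.7, ~1.1 trillion

# the reverse direction: a cochlear implant resolving ~1,000 possibilities per second
print(f"1,000 possibilities/s -> {math.log2(1000):.1f} bits/s")  # ~10 bits/s
```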
As for restoring locomotion to spinal cord patients, a major effort was made by Miguel Nicolelis to fit a paralyzed patient with an exoskeleton such that signals collected from the subject’s neocortex would have him kick a soccer ball with the exoskeleton, a demonstration used to open the 2014 FIFA World Cup (Nicolelis 2019). Realizing that the demonstration was not working, FIFA and the media networks cancelled the broadcast before the failure could be transmitted throughout the world (Tehovnik 2017b). Nevertheless, since that time investigators have not given up on the idea of restoring locomotor functions to patients with spinal cord damage. Similar to the study of Ethier et al. (2012), who found that activity from the neocortex could be used to contract the skeletal muscles by discharging the cells in the spinal cord of a monkey, Lorach et al. (2023) found that signals from the sensorimotor cortex of a paralyzed patient could drive the muscles in the legs by having the cortical signals transmitted to the lumbar spinal cord. Recordings were made from each hemisphere using an array of 64 epidural electrodes positioned over each somatosensory cortex. When the patient thought about moving his legs, the signal generated in the neocortex triggered stimulation of an array of 16 electrodes positioned over the dorsal lumbar spinal cord, such that some combination of 8 electrodes was activated over the dorsal roots of the left spinal cord and some combination of the remaining electrodes was activated over the dorsal roots of the right spinal cord. Consistent with the anatomy, the right neocortex (upon thinking to move) engaged the left spinal cord and the left neocortex engaged the right spinal cord; this elicited a stepping response at a latency of ~100 ms following the discharge of the neocortical neurons, which matches the normal latency.
The walking induced by the implants was slower than that found for an intact system, and the patient typically had to walk with the assistance of crutches since postural support was impaired. The minimal number of muscles utilized to walk is eight per leg (including gluteus maximus, gluteus medius, vasti, rectus femoris, hamstrings, gastrocnemius, soleus, and dorsiflexors), for a total of sixteen muscles (Lorach et al. 2023; Liu et al. 2008). Accordingly, if a ‘0’ and a ‘1’ are assigned to the absence and presence of a muscle contraction, then a minimum of 16 bits of information is needed to perform a stepping response. It took the paralyzed patient 4.6 seconds to complete a step (derived from Figure 4f of Lorach et al. 2023), whereas a normal subject takes a tenth of this time to complete a step (based on the step duration of one of the authors). Therefore, the information transferred by the patient was 3.5 bits per second (16 bits/4.6 sec) and that transferred by a normal subject would be 35 bits per second (16 bits/0.46 sec), that is, one order of magnitude less for the patient. Finally, it was found that in the absence of the neocortical implant but with stimulation delivered to the spinal cord implant, the patient could still walk, but at a bit-rate of 3 bits per second (derived from Figure 4f of Lorach et al. 2023). Thus, the neocortex added 0.5 bits per second to the information throughput.
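A minimal sketch of the stepping calculation, using only the numbers given above (one bit per muscle, 16 muscles, and the cited step durations):

```python
# One bit per muscle (contracted or not), 16 muscles per step, as assumed in the text.
bits_per_step = 16
patient_step_s = 4.6   # step duration of the implanted patient (from Figure 4f of Lorach et al. 2023)
normal_step_s = 0.46   # roughly a tenth of the patient's step duration, per the text

print(f"patient: {bits_per_step / patient_step_s:.1f} bits/s")  # ~3.5 bits/s
print(f"normal:  {bits_per_step / normal_step_s:.0f} bits/s")   # ~35 bits/s
```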
Following from the foregoing ‘minimal’ analysis, if each skeletal muscle in the body represents 1 bit of information, then the entire collection of muscles in the body (totaling 700, Tortora and Grabowski 1996) represents 700 bits. We know that language generation requires a minimum of 20 or so skeletal muscles (Simonyan and Horwitz 2011), or 20 bits of information.[3] Generating a muscle contraction every 500 ms would put the bit-rate for language at 40 bits per second. Accordingly, the skeleto-motor throughput by itself falls well short of the trillion bits per second estimated for the neocortex or the cerebellum, suggesting further that the information transfer capacity of these structures is mainly for internal use.
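The same bookkeeping for speech, under the text's assumptions of roughly 20 muscles and one contraction pattern every 500 ms:

```python
# ~20 skeletal muscles for speech, one bit each, one contraction pattern every 500 ms
speech_bits = 20
interval_s = 0.5
print(f"speech: {speech_bits / interval_s:.0f} bits/s")  # ~40 bits/s

# For comparison, the body's full muscular repertoire under this one-bit-per-muscle scheme:
total_muscles = 700
print(f"whole body: {total_muscles} bits available in principle")
```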
Summary
1. A neocortically-based BMI—like a functional brain—is dependent on feedback from the senses to remain operative. The more feedback channels available, the better the signal.
2. An information ceiling occurs when recording from more than 40 neurons in the neocortex using implanted electrode arrays.
3. Neural signals derived from the neocortex are not good for long-term use, and therefore a device would need to be recalibrated daily. Furthermore, to operate such a device requires much concentration on the part of a patient, since the signal does not seem amenable to automation for long-term functionality.
4. In 2013, the amount of information transmitted by a BMI averaged 0.2 bits per second. At this time, it would not have made sense for Stephen Hawking to use such a device to overcome ALS.
5. When electrodes are centered on the writing or speech areas of the motor cortex the amount of information transmitted by the neurons ranges from 1.2 to 2.5 bits per second. This translates into accurately predicting 2 to 6 possibilities per second, which is far short of the performance of a cochlear implant—which can predict over 1,000 possibilities per second—and way short of normal language—which can predict over a trillion possibilities per second.
6. To elicit a stepping response with a BMI, a throughput of 3.5 bits per second has been achieved. This rate is one order of magnitude below the required rate of 35 bits per second to produce a stepping response.
Footnotes:
[1] To determine the information transfer rate from behavioral performance data, see Tehovnik and Chen (2015) and Tehovnik et al. (2013).
[2] For the imagined writing, the hand contralateral to the implant was somewhat functional through movement, which could have contributed to the imagined writing (Willett et al. 2021).
[3] A total of 100 muscles are used for speech; these control the voice, swallowing, and breathing (Simonyan and Horwitz 2011).
Figure 32. Normalized BMI signal is plotted as a function of the number of neurons recorded. See Tehovnik and Chen (2015) for details.
Using a variation of the classical conditioning paradigm with electrical stimulation of neural tissue in behaving primates, Robert Doty (1965, 1969) was able to deduce the ‘sensational’ coding operations of the sensory maps of the neocortex by converting a classical conditioning task into an operant task (also see: Bartlett, Doty et al. 2005; Bartlett and Doty 1980; Doty et al. 1980). Monkeys were trained to depress or release a lever for reward, to signal the detection of electricity delivered to the neocortex. For sensory maps such as area V1, for example, if a monkey was trained to detect electricity delivered to one site and then the electrode was moved to another location within the map (whether ipsilateral or contralateral), the detection response was transferred immediately, much like what happens when a monkey is trained to detect a visual stimulus in one part of the visual field: it can afterwards generalize the response to any location within the visual field immediately (Schiller and Tehovnik 2015). But if the electrode is moved to extrastriate area V4, for example, the detection response acquired by stimulation of V1 is not transferred to V4. New training is required to associate the percept generated by electrical stimulation of V4 and the motor response to obtain a reward. This suggests that the percepts generated by neocortical stimulation are bound per map (Bartlett, Doty et al. 2005). This result concurs with the work of Penfield and colleagues who found that common sensations—i.e., qualitatively similar phosphenes—were evoked from a cortical topographic map (Penfield 1975; Penfield and Rasmussen 1952). Hence, individual maps of the neocortex define sensation or conscious experience, and this sensation depends on the connectivity between the neurons of a map for the immediate transfer of information.
Most significantly, when the foregoing experiment was done in the hippocampal formation, there was never any transfer of the detection response between the different stimulation sites (Knight 1964). This suggests that the hippocampal fibres transmit information independently to and from the neocortex, which is what one would want of a hippocampal pathway mediating the consolidation and retrieval of information vis-à-vis the neocortex (Corkin 2002; Rolls 2004; Penfield and Roberts 1967; Schwarzlose, Kanwisher et al. 2005; Scoville and Milner 1957; Squire et al. 2001). The neocortex contains information that is highly distributed, and this information must be recomposed to drive behavior or a conscious state volitionally, e.g., thinking about biology and consciousness (Corkin 2002; Hebb 1949; Ibayashi et al. 2018; Kimura 1993; Ojemann 1991; Penfield 1975; Sacks 2012; Sereno et al. 2022; Squire et al. 2001; Vanderwolf 2007). The stream of consciousness, as introduced by James (1890), depends on recomposing (or unifying) the cortical information so that the outputs make sense. Schizophrenia is a condition whereby the outputs make no sense.
Summary:
(1) Individual maps of the neocortex define sensation or conscious experience, and this sensation depends on the connectivity between the neurons of a map.
(2) Pathways transmitting information to and from the neocortex to consolidate and retrieve information are composed of neurons that are independent, so that information is stored flexibly throughout the neocortex.
(3) Neocortical information must be recomposed correctly for one not to be diagnosed as a schizophrenic.
Why is there still no information on product packaging and advertisements about the level of CO2 emissions from the production process of a specific product?
Why is there still no information on product packaging and advertising about the level of emissions and environmental pollution generated during the production process of a specific product?
On the packaging of anti-products (products that are harmful to health) and stimulants such as cigarettes, there is information warning consumers about the health harm of smoking. If citizens, through activists in NGOs and some political circles, have succeeded in lobbying for compulsory information about the harm to health, i.e. the risk of developing lung cancer and other diseases resulting from smoking cigarettes, why do analogous solutions still not exist for other types of anti-products and harmful stimulants, including stimulants to which one can easily become addicted? Besides, one of the most negative aspects of using certain other stimulants is the car accidents caused by drinking and driving. Many fatal car accidents are caused after taking intoxicating substances such as alcohol and, for example, some painkillers, tranquilizers and the like. So why do the packaging of these drugs and their advertisements not warn about the negative consequences of their improper use?
An analogous issue is that of emissions, i.e. the emission of CO2, methane and other greenhouse gases, and of environmental pollution by waste generated during production processes that is harmful to human health and to the biosphere and the biodiversity of the planet's natural ecosystems. After all, it has been known for years that greenhouse gas emissions generated by production and other economic processes have been a major factor in the greenhouse effect of the planet's atmosphere since the beginning of the first industrial revolution and in the accelerating process of global warming. Many citizens, with the livelihoods of their children and grandchildren in the coming decades in mind, would probably choose among highly substitutable products and/or services those whose production involves lower greenhouse gas emissions and/or lower emissions of polluting waste harmful to human health and to the biosphere and biodiversity. Apparently, the activities of pro-social, pro-environmental and pro-climate non-governmental and activist organizations are too weak in the face of lobbying by business, industrial corporations and the many companies and enterprises that can afford to finance the election campaigns of political parties pursuing their business interests.
I am conducting research on this issue. I have included the conclusions of my research in the following article:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I invite you to read the publication cited above and to join me in scientific cooperation on these issues.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why is there still no information on product packaging and advertising about the level of emissions and environmental pollution generated during the production process of a particular product?
Why is there still no information on product packaging and advertising about the level of CO2 emissions from the production process of a specific product?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Embodiment is the idea that the brain does not need a detailed representation of the world, since the world is always present to organisms via an intact sensorimotor apparatus (Clark 1998). An extreme example of embodiment is the way in which the late Stephen Hawking (who suffered from the neurodegenerative disease amyotrophic lateral sclerosis, ALS) delivered his university lectures at Cambridge. Although he could communicate at only 0.1 bits per second [corrected for information redundancy, Reed and Durlach 1998] using a synthetic device (that was responsive to his cheek muscles, De Lange 2011), his lecture could be delivered at a normal rate of ~40 bits per second. Like most professors, he would need to prepare his lecture in advance. But in addition, the interface used by Hawking for communication was programmed with a word-prediction algorithm that had access to the entire lecture (Denman et al. 1997). Based on the characters initially uttered by Hawking, complete paragraphs could be summoned and delivered automatically through his voice synthesizer. Thus, there was no need for Hawking to memorize his lecture (which is also true for many of us who prepare slides in advance). In the absence of the algorithm, however, I am sure he would have had no problem communicating the contents of his lecture—but at a rate of 0.1 bits per second, which is far too slow for anyone to follow his speech. It is noteworthy that many people with Hawking’s condition pass away within several years of being overcome by ALS. For Hawking, it was his love of physics that kept him alive for his many decades of productive existence.
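A minimal sketch (assumed for illustration, not Denman et al.'s actual algorithm) of how prefix completion against a prepared text can raise the effective output rate: a few typed characters retrieve a whole prepared passage.

```python
# Hypothetical prepared passages; the real system had access to the full lecture text.
lecture = [
    "Black holes are regions of spacetime from which nothing, not even light, can escape.",
    "The event horizon marks the boundary beyond which there is no return.",
]

def complete(prefix, passages=lecture):
    """Return the first prepared passage starting with the typed prefix, or None."""
    prefix = prefix.lower().strip()
    for passage in passages:
        if passage.lower().startswith(prefix):
            return passage
    return None

print(complete("The event"))  # a few characters summon the whole prepared sentence
```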
Samuel Morse, the inventor of Morse code, understood that certain letters in the English language occur more frequently than others (Gallistel and King 2010). To deal with this, Morse used a single dot to represent the most frequently occurring letter in the language, ‘E’, and multiple symbols [two dashes followed by two dots] to represent the least frequently occurring letter, ‘Z’. When Shannon was developing his communication theory, he, together with Fano, was able to compress the information transmitted by having the most frequently occurring words require fewer bits than the least frequently occurring ones. This became known as Shannon-Fano coding.
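To make the compression idea concrete, here is a compact sketch of Shannon-Fano coding (the letter weights below are illustrative only, not values from the text): symbols are sorted by frequency and recursively split into two groups of roughly equal total weight, so frequent symbols end up with short codewords and rare ones with long codewords.

```python
def shannon_fano(symbols):
    """symbols: [(symbol, weight), ...] sorted by descending weight -> {symbol: codeword}."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(w for _, w in symbols)
    # choose the split that makes the two groups' total weights as equal as possible
    running, best_split, best_diff = 0.0, 1, float("inf")
    for i, (_, w) in enumerate(symbols[:-1]):
        running += w
        diff = abs(total - 2 * running)
        if diff < best_diff:
            best_split, best_diff = i + 1, diff
    codes = {s: "0" + c for s, c in shannon_fano(symbols[:best_split]).items()}
    codes.update({s: "1" + c for s, c in shannon_fano(symbols[best_split:]).items()})
    return codes

# Illustrative letter weights (percent of text); 'E' is common, 'Z' is rare.
freqs = [("E", 12.7), ("T", 9.1), ("A", 8.2), ("Q", 0.10), ("Z", 0.07)]
print(shannon_fano(freqs))  # 'E' gets the shortest codeword, 'Z' the longest
```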
How does the brain go about compressing information? Patients who had their vision restored later in life, after missing out on the critical period of development, are unable, when presented with a visual image, to resolve the collage of colors and shades into a meaningful world, even though their retinal receptors are intact (Fine et al. 2003; Gregory 2003; Gregory and Wallace 2001; Kurson 2007). The ability to discriminate visually only comes about if the temporal cortices are made functional through learning, which allows one to resolve and identify visual objects, an attribute of all mammals (Bruce et al. 1981; Fine et al. 2003; Froudarakis et al. 2019).
In the laboratory of Peter Schiller, there was the idea that the brain does not have the capacity to store all the images that are out there. This thinking is flawed. When was the last time you were challenged to memorize all the images out there? It is the process of learning that hones the selection of items so that what is stored is based on utility, i.e., the most frequently experienced items are stored. This means that the information contained in Einstein’s brain is very different from the information contained in Pelé’s brain, but of course both implement similar routines for drinking, eating, fornicating, and so on. Einstein would not do too well on the pitch against Pelé, nor would Pelé do too well on the pitch of physics against Einstein. In short, learning is what allows the brains of animals to be efficient about information storage, thereby implementing a type of Shannon-Fano coding. And contrary to the advocates of the hard problem, there is no such problem, since every human being (including identical twins) is configured differently. Thus, being preoccupied with the hard problem means that you don’t understand biology (e.g., Chalmers 1995, 1997; Koch and Chalmers 2023).
Much has been made of the idea that humans are genetically programmed to learn languages at an early age, suggesting that learning plays a minor role in this process (Chomsky 1959). But we have argued that a large part of being able to speak at an information transfer rate exceeding 40 bits per second (i.e., over a trillion possibilities per second, Coupé et al. 2019; Reed and Durlach 1998) is due to having a decade-long formal education in one’s native and secondary languages (Tehovnik, Hasanbegović, Chen 2024). For example, Joseph Conrad, whose native language was Polish and who became a world-renowned writer, learned to write in English in his 20s (Wikipedia/Joseph Conrad/July 11, 2024). In what is now Poland, Conrad was mentored by his father, Apollo Korzeniowski, a writer and, later, a political activist convicted by the Russian Empire. To escape the political turmoil of eastern Europe, Conrad (to the dislike of his father) exiled himself to England, which marked the start of his writing career. And the rest we know: ‘Heart of Darkness’, ‘Lord Jim’, ‘Nostromo’, and so on.
Second-language learning by 20-year-olds was investigated by Hosoda et al. (2013). They recruited twenty-four Japanese university students who were serially bilingual, with the earliest age of learning English being seven years. The students completed a 4-month training course of intensive English study to enhance their vocabulary. They learned 60 words per week for 16 weeks, yielding a total of 960 words, which translates into an information transfer rate of 0.0006 bits per second (see Footnote 1), appreciably lower than the transfer rate of ~40 bits per second for producing speech (Coupé et al. 2019; Reed and Durlach 1998).
Furthermore, there is this belief that learning a language is accelerated in children as compared to adults (Chomsky 1959). By the age of eighteen, one can have memorized some 60,000 words in the English language (Bloom and Markson 1998; Miller 1996), which represents an information consolidation rate of 0.0006 bits per second (see Footnote 2), which is the same as the rate experienced by the Japanese students learning English as a second language as adults (Hosoda et al. 2013).
Two conclusions can be drawn. First, consolidating a language is many orders of magnitude slower than delivering a speech (i.e., 0.0006 bits per second vs. 40 bits per second). Second, the idea that children learn languages at an accelerated rate may not be true. This needs to be investigated properly, however, by measuring the rate of language learning (in bits per second) yearly, starting neonatally and ending in adulthood. Also, there is more to language than memorizing words, so linguists will need to design experiments covering all the major parameters of language and express these parameters in terms of bits per unit time. It is time that linguistics (like neuroscience) became a quantitative discipline.
Footnote 1: Bit-rate calculation: if each word is made up of 4 letters (on average), then the bit-rate of learning (using values from Reed and Durlach 1998) = 1.5 bits per letter x 4 letters/word x 960 words/16 weeks = 360 bits per week = 0.0006 bits/sec. The learning period includes not only the time spent memorizing the words, but also the time required to consolidate the information in the brain, which occurs during sleep and during moments of immobility (Dickey et al. 2022; Marr 1971; Wilson and McNaughton 1994). After the learning there was an increase in the grey matter volume of Broca’s area, the head of the caudate nucleus, and the anterior cingulate cortex; as well, there was an increase in the white matter volume of the inferior frontal-caudate pathway and of the connections between Broca’s and Wernicke’s areas (Hosoda et al. 2013). The grey and white matter enhancement correlated with the extent of word memorization.
Footnote 2: Bit-rate calculation: Memorizing 60,000 words in 18 years translates into 360,000 bits of information [i.e., 60,000 words x 4 letters per word x 1.5 bits per letter, Reed and Durlach 1998], or a word consolidation rate of 55 bits per day (about 9 words per day) over eighteen years of life. Therefore, the rate per second is 0.0006 bits per second. For other details see Footnote 1.
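The two footnote calculations can be reproduced directly (assumptions as stated: four letters per word on average, 1.5 bits per letter from Reed and Durlach 1998):

```python
BITS_PER_WORD = 4 * 1.5  # four letters per word, 1.5 bits per letter (Reed and Durlach 1998)

# Footnote 1: 960 words over a 16-week course
bits_course = 960 * BITS_PER_WORD
secs_course = 16 * 7 * 24 * 3600
print(f"course: {bits_course / 16:.0f} bits/week, {bits_course / secs_course:.4f} bits/s")

# Footnote 2: 60,000 words consolidated over 18 years
bits_native = 60_000 * BITS_PER_WORD
days_native = 18 * 365
print(f"native: {bits_native / days_native:.0f} bits/day, {bits_native / (days_native * 86400):.4f} bits/s")
```

Both routes land at roughly 0.0006 bits per second, as stated in the footnotes.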
How can artificial intelligence technology and Big Data be used to help develop critical thinking in young people and to reduce the disinformation that targets children and young people through online social media?
Disinformation is currently the most frequently cited problem in the social media from which children and young people gain knowledge. Companies engage advertising agencies that specialize in running online advertising campaigns, in which advertising spots, videos and banners informing people about promotional offers for products and services are posted on social media. These online social media are also viewed by children and teenagers, and for some of them the primary audiences for profiled information and marketing messages are mainly school-aged youth. Children and adolescents are particularly susceptible to the influence of information transferred through these online media. Advertisements are thematically profiled to correlate with the main interests of children and adolescents. Unfortunately, many of the products and services promoted through online advertising campaigns are not suitable for children and adolescents and/or generate many negative effects. Nowadays, applications based on generative artificial intelligence technology, i.e. intelligent chatbots, are increasingly used to generate banners, graphics, photos, videos, animations and advertising spots. With these tools, which are available on the Internet, it is possible to create a photo, graphic or video from a written command: digitally generated works of such high graphic quality that it is very difficult to determine whether they are authentic photos taken with a camera or smartphone or images generated by an intelligent chatbot. It is especially difficult for children and young people to make this distinction when they view such AI-generated "works" in banners or advertising videos. It is necessary, therefore, that education should develop in children the ability to think critically, to ask questions, to question the veracity of advertising content, and not to accept uncritically everything found in online social media. It is essential to add the teaching of critical thinking to the process of educating children and young people. The goal of such education should be, among other things, to develop in children and young people the ability to identify disinformation, including the increasingly common factoids, deepfakes, etc. in online social media. Since applications based on artificial intelligence are involved in creating the disinformation found mainly in these social media, children and adolescents should, as part of their education, learn about the applications available on the Internet that are based on generative artificial intelligence technology, through which texts, graphics, photos, drawings, animations and videos can be generated in a partially automated manner from a verbal command. This is how applications based on the new technologies of Industry 4.0/5.0, including generative artificial intelligence and Big Data, should be used to help develop critical thinking and a kind of resistance to misinformation in young people. During school lessons, students should learn about the capabilities of AI-based applications available on the Internet and use them creatively to develop critical thinking skills.
In this way, it is possible to reduce disinformation directed through online social media towards children and young people.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence and Big Data technologies be used to help develop critical thinking in young people and to reduce the misinformation that targets children and young people through online social media?
How can artificial intelligence technology be used to help educate youth in critical thinking and the ability to identify disinformation?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
To what extent has the scale of disinformation generated with the use of applications available on the Internet based on generative artificial intelligence technology increased?
To what extent has the scale of disinformation generated in online social media increased using applications based on generative artificial intelligence technology available on the Internet?
Many research institutions have included among the main types of threats and risks developing globally in 2023 the increase in the scale of organized disinformation operating in online social media. The diagnosed increase in the scale of disinformation generated in online social media is related to the use of applications available on the Internet that are based on generative artificial intelligence technology. With the help of such applications it is possible, without being a computer graphic designer and even without artistic skills, to simply and easily create graphics, drawings, photos, images, videos, animations, etc., which can look like professionally created "works" and can depict fictional events. Then, with the help of other applications equipped with generative artificial intelligence and advanced language models, i.e. intelligent chatbots, text can be created to describe the specific "fictional events" depicted in the generated images. Accordingly, since the end of 2022, i.e. since the first such intelligent chatbot, the first version of ChatGPT, was made available on the Internet, the number of memes, photos, comments, videos, posts, banners, etc. generated with applications equipped with artificial intelligence tools has been growing rapidly, along with a rapid increase in the scale of disinformation generated in this way. In order to limit the scale of this disinformation developing in online media, technology companies running social media portals and other online information services are, on the one hand, perfecting tools for identifying posts, entries, comments, banners, photos, videos, animations, etc. that contain specific, usually thematic, types of disinformation. However, these solutions are not perfect, and the scale of disinformation operating in Internet social media is still high. On the other hand, institutions dedicated to combating disinformation are being established, and NGOs and schools are conducting educational campaigns to make citizens aware of the large scale of disinformation developing on the Internet. In addition, proposed regulations such as the AI Act, a set of regulations on the proper use of tools equipped with artificial intelligence technology that is expected to come into force in the European Union within the next two years, may play an important role in reducing the scale of disinformation developing on the Internet.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent has the scale of disinformation generated in online social media using applications based on generative artificial intelligence technology available on the Internet increased?
To what extent has the scale of disinformation generated using applications based on generative artificial intelligence technology available on the Internet increased?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be referred to as sustainable, pro-climate, pro-environment, green, etc.?
Advanced analytical systems, including complex forecasting models that enable multi-criteria, highly sophisticated forecasts, based on the processing of large sets of data and information, of the development of multi-faceted climatic, natural, social, economic and other processes, are increasingly based on the new Industry 4.0/5.0 technologies, including Big Data Analytics, machine learning, deep learning and generative artificial intelligence. The use of generative artificial intelligence enables the application of complex data processing algorithms according to precisely defined assumptions and human-defined factors. The use of computerized, integrated Business Intelligence information systems allows analysis to be carried out in real time on the basis of continuously updated data and allows reports and expert studies to be generated in accordance with defined formulas. Digital twin technology allows computers to build simulations of complex, multi-faceted forecast processes in accordance with defined scenarios of how these processes might unfold in the future. In this regard, it is also important to determine the probability of occurrence of several different, defined and characterized scenarios of developments, specific processes, phenomena, etc. Business Intelligence analytics should therefore make it possible to precisely determine the probability of the occurrence of a certain phenomenon, the operation of a process or the appearance of the described effects, including those classified as opportunities and threats to the future development of the situation. Besides, Business Intelligence analytics should enable precise quantitative estimation of the scale of influence of the positive and negative effects of certain processes, as well as of the factors acting on these processes and the determinants conditioning the realization of certain scenarios. Cloud computing makes it possible, on the one hand, to update the database with new data and information from various institutions, think tanks, research institutes, companies and enterprises operating within a selected sector or industry of the economy and, on the other hand, to enable the simultaneous use of such a database by many beneficiaries, many business entities and/or, for example, many Internet users if the database were made available on the Internet. Where Internet of Things technology is applied, it would be possible to access the database from various types of devices equipped with Internet access. The application of Blockchain technology makes it possible to increase the cybersecurity of the transfer of data and Big Data information sent to the database, both when updating the collected data and when the analytical system is used by external entities. The use of machine learning and/or deep learning in conjunction with artificial neural networks makes it possible to train an AI-based system to perform multi-criteria analysis, build multi-criteria simulation models, etc. in the way a human would. For such complex analytical systems, which process large amounts of data and information, to work efficiently, a good solution is to use state-of-the-art quantum computers characterized by high computing power, able to process huge amounts of data in a short time.
A center for the multi-criteria analysis of large data sets built in this way can occupy quite a large floor area and house many servers. Because of the necessary cooling and ventilation systems and for security reasons, this kind of server room can be built underground, while, due to the large amount of electricity absorbed by such a big data analytics center, it is a good solution to build a power plant nearby to supply it. If this kind of data analytics center is to be described as sustainable, in line with the trends of sustainable development and the green transformation of the economy, then the power plant supplying it should generate electricity from renewable energy sources, e.g. photovoltaic panels, wind turbines and/or other renewable and emission-free energy sources. In such a situation, i.e. when a data analytics center that carries out multi-criteria processing of Big Data with Big Data Analytics is powered by renewable and emission-free energy sources, it can be described as sustainable, pro-climate, pro-environment, green, etc. Besides, when the Big Data Analytics center is equipped with advanced generative artificial intelligence technology and is powered by renewable and emission-free energy sources, the AI technology used can also be described as sustainable, pro-climate, pro-environment, green, etc. On the other hand, the Big Data Analytics center can be used to conduct multi-criteria analyses and build multi-faceted simulations of complex climatic, natural, economic, social and other processes, with the aim, for example, of developing scenarios for the future development of processes observed up to now, creating simulations of the continuation of diagnosed historical trends, developing different variants of scenarios of how the situation may develop depending on the occurrence of certain determinants, determining the probability of occurrence of those determinants, and estimating the scale of influence of external factors, the potential materialization of certain categories of risk, the possibility of certain opportunities and threats, and the probability of materialization of the various scenario variants in which the potential continuation of the diagnosed trends was characterized for the processes under study, including the processes of sustainable development, the green transformation of the economy, the implementation of sustainable development goals, etc. Accordingly, the data analytics center built in this way can, on the one hand, be described as sustainable, since it is powered by renewable and emission-free energy sources. In addition, it can also be helpful in building simulations of complex multi-criteria processes, including the continuation of certain trends in the determinants influencing these processes and the factors co-creating them, which concern the potential development of sustainable processes, e.g. economic processes, i.e. sustainable economic development.
Therefore, the data analytics center built in this way can be helpful, for example, in developing a complex, multifactor simulation of the progressive process of global warming in subsequent years, of the future negative effects of the deepening scale of climate change and of the negative impact of these processes on the economy, and also in forecasting and simulating the future process of carrying out the pro-environmental and pro-climate transformation of the classic, growth-oriented, brown, linear economy of excess into a sustainable, green, zero-carbon, zero-growth, closed-loop economy. Thus, the sustainable data analytics center built in this way can be described as sustainable because it is supplied from renewable and zero-carbon energy sources, but it will also be helpful in developing simulations of future processes of the green transformation of the economy carried out according to certain assumptions, defined determinants, and the estimated probability of occurrence of certain impact factors and conditions, as well as in estimating costs, gains and losses, opportunities and threats, identifying risk factors and particular categories of risk, and estimating the feasibility of the defined scenarios of the green transformation of the economy planned for implementation. In this way, a sustainable data analytics center can also be of great help in the smooth and rapid implementation of the green transformation of the economy.
I have described the key issues concerning the green transformation of the economy in the following article:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I have described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be described as sustainable, pro-climate, pro-environment, green, etc.?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 and RES technologies?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How does generative artificial intelligence technology combined with Big Data Analytics and other Industry 4.0 technologies help in planning and improving production logistics management processes in business entities, companies and enterprises?
Production logistics management in a manufacturing company is currently one of the key areas of business management, significantly affecting the level of technical and organizational efficiency of business operations. A change in this level of efficiency usually also has a significant impact on, and correlates with, overall business efficiency and the financial results generated by the business entity. Internal production logistics is among the key segments of logistics in the enterprise, and the efficiency of production processes and of the enterprise as a whole largely depends on how it is organized. In recent years, more and more companies and enterprises have been optimizing production logistics through the implementation of information systems and the automation of individual operations in the process. Production logistics is mainly concerned with ensuring the optimal flow of materials and information in the process of producing all types of goods. It does not deal with the technology of production processes, but only with the organization of the production system together with its storage and transport environment. Production logistics focuses on optimizing all operations related to the production process, such as supplying the plant with the raw materials, semi-finished products and components necessary for production, transporting items between successive stages of production, and transferring finished products to distribution warehouses. Precisely defining optimal production logistics is a lengthy process, requiring analysis and modification of almost every process taking place in a company. One of the key factors in the optimization of production logistics is the reduction of inventory levels and their adjustment to the ongoing production process, which translates directly into lower storage costs. Effective management of production logistics should ensure timely delivery while maintaining high product quality, and it can be supported by the implementation of new Industry 4.0/5.0 technologies, including Big Data and generative artificial intelligence.
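As a minimal illustration of the inventory-reduction point above, the sketch below applies the classic economic order quantity (EOQ) model to hypothetical demand, ordering-cost and holding-cost figures; a real production-logistics system fed by Big Data and AI would of course work with measured demand profiles and far richer models.

from math import sqrt

# A minimal inventory sketch using the classic economic order quantity (EOQ)
# model; all input figures below are hypothetical.

annual_demand = 12_000      # units of a component used per year (assumed)
ordering_cost = 150.0       # cost of placing one order (assumed)
holding_cost = 2.5          # cost of holding one unit for a year (assumed)
lead_time_days = 7          # supplier lead time (assumed)
working_days = 250

eoq = sqrt(2 * annual_demand * ordering_cost / holding_cost)
daily_demand = annual_demand / working_days
reorder_point = daily_demand * lead_time_days

print(f"Economic order quantity: {eoq:.0f} units per order")
print(f"Reorder point:           {reorder_point:.0f} units in stock")

The same logic, driven by AI-based demand forecasts instead of a fixed annual figure, is one of the simplest ways generative and predictive tools can feed into production-logistics decisions.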
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How does the technology of generative artificial intelligence, combined with Big Data Analytics and other Industry 4.0 technologies, help to plan and improve production logistics management processes in business entities, companies and enterprises?
How does generative artificial intelligence technology help in planning and improving production logistics processes in an enterprise?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Do companies running social media portals consciously shape the general social awareness of citizens, Internet users through the specific information policies applied?
In recent years, there have been an increasing number of examples of deliberate practices in which companies operating social media portals consciously shape the general social awareness of citizens and Internet users through the specific information policies they apply. The Senate committees of inquiry held at the U.S. Capitol over the past several years have addressed, among other things, the verification of the use of algorithms on platforms such as Facebook that promote certain content, including not only socially positive but also socially negative content. The algorithms in question are then changed so that the scale of social negativity is reduced. Recently, however, there have been an increasing number of similar cases of algorithms promoting specific political content, e.g. promoting content typical of right-wing political options while limiting the reach of content typical of left-wing political views. These are therefore situations of intentional discrimination against the part of the community of citizens holding political views that the owners of certain companies operating social media portals have deemed contrary to the information policy applied in their media and/or to the specific ideology promoted in these media. This type of activity is difficult to reconcile with freedom of speech, the unrestricted development of the information society and democracy.
Recently, companies running social media sites have been improving these media through the implementation of new Industry 4.0/5.0 technologies, including Big Data Analytics and generative artificial intelligence. These technologies can also be used to technically improve the algorithms that control and promote selected content posted and shared by Internet users, which is an important part of shaping the information policy of these media.
I have described the issues of the role of information, information security, including business information transferred through social media, and the application of Industry 4.0/5.0 technologies to improve data and information transfer and processing systems in social media in the following articles:
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
The Importance and Organization of Business Information Offered to Business Entities in Poland via the Global Internet Network
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Do the companies running social media portals consciously shape the general social consciousness of citizens, Internet users through the specific information policies applied?
Do companies running social media portals shape the general social consciousness of citizens through the specific information policies applied?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How to curb the growing scale of disinformation, including social media-generated factoids and deepfakes, through the use of generative artificial intelligence technology?
In order to reduce the growing scale of disinformation, including disinformation generated in social media through the increasing number of emerging fake news items, deepfakes and content produced with Internet applications based on generative artificial intelligence, the very same GAI technology can be used. Constantly improved intelligent chatbots and other applications based on generative artificial intelligence, taught to carry out new types of activities, tasks and commands, can be applied to identify instances of disinformation spread primarily in online social media. Such disinformation is particularly dangerous for children and adolescents; it can significantly affect the general public's awareness of certain issues, influence the development of certain social processes, affect the results of parliamentary and presidential elections, and also affect the level of sales of certain types of products and services. In the absence of a developed institutional system of media oversight, including oversight of the new online media; in the absence of a system for controlling the objectivity of content directed at citizens in advertising campaigns; where competition and consumer protection institutions do not analyse disinformation; where institutions protecting democracy are missing or function poorly; and where no institutions reliably safeguard a high level of journalistic ethics and media independence, the scale of disinformation of citizens by various groups of influence, including public institutions and commercially operating business entities, may be high and may generate high social costs. Accordingly, new Industry 4.0/5.0 technologies, including generative artificial intelligence (GAI), should be involved in order to reduce the growing scale of disinformation, including factoids, deepfakes, etc. generated in social media. GAI technologies can help identify fake-news pseudo-journalistic content, photos containing deepfakes, and factually incorrect content contained in banners, spots and advertising videos published in various media as part of advertising and promotional campaigns aimed at activating sales of various products and services.
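As one hedged sketch of how such tooling could flag suspect content, the snippet below trains a toy text classifier with TF-IDF features and logistic regression; the four labelled examples are placeholders, and a deployable system would need a large curated corpus, multilingual models and human review of every flag.

# A minimal sketch of a supervised fake-news classifier; the labelled
# examples below are placeholders, and a production system would need a
# large, curated training corpus plus human review of the output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Official statistics office publishes quarterly GDP report",
    "Miracle cure hidden by doctors, share before it is deleted",
    "Central bank announces interest rate decision",
    "Secret plot revealed, mainstream media refuses to report it",
]
train_labels = [0, 1, 0, 1]   # 0 = credible, 1 = likely disinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Shocking truth they do not want you to know"]))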
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in an article of my co-authorship:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to curb the growing scale of disinformation, including social media-generated factoids and deepfakes, through the use of generative artificial intelligence technology?
How to curb disinformation generated in social media using artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Does the application of generative artificial intelligence technology and Big Data Analytics enable the improvement of computerized Business Intelligence business management support systems?
The growing volume of data processed in companies and enterprises makes it necessary to involve specialized software and information systems so that data analysis can be carried out effectively and its results can be used as knowledge supporting the management processes of the business entity. The acquisition of large amounts of data from various sources, and their storage and processing, is the domain of Big Data Analytics technology. However, in order to significantly increase the efficiency of processing large sets of data and information and to use this type of analytics to support the management of a business entity, computerized, multi-module Business Intelligence applications are particularly helpful. The combination of database technologies, Big Data Analytics platforms and Business Intelligence applications makes it possible, on the basis of large data sets containing not fully structured and organized data, to generate information useful to a specific entity, as well as concretized and refined substantive knowledge used to support the management of an organization, institution, business entity, etc. The key objectives of applying the knowledge generated in this way include improving the quality of business decisions, reducing the risk of errors in organizational management processes, improving risk management systems, and increasing the effectiveness of early warning systems for new threats and development opportunities. Analytics conducted on large data sets using Big Data Analytics and Business Intelligence applications can help in carrying out restructuring, developing a new strategy, investment project, marketing plan or business remodeling. Analytics based on Business Intelligence applications can support the management of various spheres of business activity of companies and enterprises and thus the operation of various departments, including procurement, production, distribution, sales, marketing communication with customers, and relations with business contractors, financial institutions or public institutions. Multi-module Business Intelligence information systems can operate as integrated information systems or can be one of the key elements of such systems, digitally integrating many different aspects of companies, enterprises or other types of entities. In addition, complex multi-module Business Intelligence information systems can be dedicated to handling and supporting specific business processes at different levels of an organization's structure, i.e. they can consist of modules serving operational employees, departmental managers and executives, as well as the board of directors and the company's president. Moreover, with the development of deep learning carried out using artificial neural networks and of generative artificial intelligence technologies, there are opportunities to increase the scale of automation of analytical processes through the use of these technologies.
The application of artificial intelligence technologies to analytics carried out using Big Data Analytics can significantly increase the efficiency of analytical processes, and in terms of supporting organizational management it can speed up decision-making and reduce the risk of errors. A particularly important attribute of such solutions is the ability to perform predictive analysis and forecasting, so that an entrepreneur can spot certain business and economic patterns in good time and forecast future financial performance and development trends more accurately. Thanks to the use of generative artificial intelligence technology, the functionality and usefulness of analytics based on Big Data Analytics and Business Intelligence class systems is increasing significantly.
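A minimal sketch of the predictive-analytics capability mentioned above: fitting a simple trend to twelve months of hypothetical sales figures and projecting the next quarter. The numbers and the linear model are illustrative assumptions, not a recommendation of a particular forecasting method.

# A minimal predictive-analytics sketch of the kind a BI module might run:
# fit a trend to historical monthly sales (hypothetical figures) and
# project the next quarter. Real systems would use richer models and data.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)                     # past 12 months
sales = np.array([100, 104, 103, 110, 115, 117,
                  121, 124, 130, 133, 139, 142], dtype=float)

model = LinearRegression().fit(months, sales)
future = np.arange(13, 16).reshape(-1, 1)                    # next 3 months
forecast = model.predict(future)

print("Forecast for months 13-15:", np.round(forecast, 1))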
In view of the rapid development of applications of generative artificial intelligence technology and its implementation into applications and information systems supporting business management processes, I addressed the Research Gate community of Researchers, Scientists, Friends with the above question.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does the application of generative artificial intelligence and Big Data Analytics technologies enable the improvement of computerized Business Intelligence support systems for enterprise management processes?
Does the application of artificial intelligence and Big Data Analytics enable the improvement of computerized Business Intelligence systems?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
The neocortex is distinctly different from the cerebellum in that when electrical stimulation is delivered to the neocortex, a detection response related to an evoked sensation is exhibited by subjects (from rodents to cats to primates), but such a response is not apparent following cerebellar stimulation (Bartlett and Doty 1980; Bartlett, Doty et al. 2005; Doty 1965, 1969, 2007; Doty et al. 1980; Koivuniemi and Otto 2012; Penfield 1958, 1959, 1975; Penfield and Rasmussen 1952; Rutledge and Doty 1962; Tehovnik and Slocum 2013). Also, when eliciting a detection response from the neocortex, a human subject can describe the sensations produced in detail (Penfield 1958, 1959, 1975; Penfield and Rasmussen 1952). Furthermore, stimulation of the neocortex is such that once a detection response occurs, which can take several days of training, the response is transferred immediately between any site stimulated within a topographic map (Bartlett, Doty et al. 2005; Bartlett and Doty 1980; Doty 1965, 1969; Doty et al. 1980). For example, stimulation of area V1 can be transferred to any region within V1 including contralateral sites, but if the electrode is now moved to V4 there is no transfer until new training has been completed. This lack of transfer has been explained as stimulation of V1 and V4 producing distinctly different sensations of visual consciousness (Bartlett, Doty et al. 2005).
Finally, for areas of the brain that store elements individually devoid of any map such as the temporal or orbital cortex (or the hippocampal formation), no amount of training induces transfer between sites (Doty 1969). The reason for this is that here ‘declarative’ information is stored individually per neuron so that at the time of retrieval the information remains unadulterated and concatenated via connectivity loops (that include the cerebellum, Hasanbegović 2024) to summon a specific stream of consciousness, such as when giving a speech that depends on the elements of the speech as stored in specific locations of the language complex (Ojemann 1991). The storage configuration, which is unique per individual (Ojemann 1991), must depend on how one has learned the language (e.g., whether learned as a first, second, or third language; whether learned at childhood or adulthood; whether learned fully with writing and reading capability; and so on).
Tononi (2008) has argued that the reason consciousness is mediated by the neocortex and not by the cerebellum is that neurons within the neocortex are well connected, whereas those of the cerebellar cortex are not (see Fig. 1). This led Tononi to propose that the more integrated (or connected) the neurons of a brain region, the higher the level of consciousness. Thus, the total number of connected neurons in the neocortex/telencephalon or a homologue (as it may apply to invertebrates) should affect the caliber of consciousness achieved by a species, with the amoeba being ground zero for consciousness, as evidenced by the rudimentary learning and short lifespan of no more than two days of this single-celled animal (Nakagaki et al. 2000; Saigusa et al. 2008).
Figure 1: A model by Tononi (2008) of how information may be differentially integrated via synaptic connections in the neocortex (A), the cerebellar cortex (B), the afferent pathways (C), and the cortico-subcortical loops including the cerebellum (D). The Φ value represents the degree of connectivity to support consciousness, with a value of 0.4 (e.g., between cerebellar modules) indicating low connectivity and a value of 4 (between neocortical neurons) indicating high connectivity. A value of zero would indicate no connectivity. For other information see caption of Figure 4 of Tononi (2008).
How to reduce the risk of leakage of sensitive data of companies, enterprises and institutions that employees of these entities have previously entered into ChatGPT?
How to reduce the risk of leakage of sensitive data of companies, enterprises and institutions, which employees of these entities have previously entered into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to facilitate their work?
Despite training and the updating of internal rules in many companies and enterprises regarding the proper use of intelligent chatbots, such as the ChatGPT made available online by OpenAI and similar intelligent applications that more and more technology companies are releasing on the Internet, there are still situations in which careless employees enter sensitive data of the companies and enterprises that employ them into these online tools. In such a situation, there is a high risk that the data and information entered into ChatGPT, Copilot or any other such chatbot may subsequently appear in a reply, report, essay, article, etc. edited by the application on the smartphone, laptop or computer of another user of the chatbot. In this way, another Internet user may, accidentally or through a deliberate search for specific data, come into possession of key, sensitive data of a business entity, public institution or financial institution, concerning, for example, confidential strategic plans, i.e. information of great value to competitors or to the intelligence organizations of other countries. Situations of this kind have already occurred in companies with highly recognizable brands in specific markets for products or services. They clearly indicate that it is necessary to improve internal procedures for data and information protection, the efficiency of data protection systems, early warning systems signalling a growing risk of loss of key company data, and systems for managing the risk of potential leakage of sensitive data and of possible cybercriminal attacks on internal company information systems. In addition, in parallel with improving these systems, internal regulations on the correct use of publicly available chatbots by employees should be updated on an ongoing basis according to the scale of the risk, the development of new technologies and their implementation in the business entity. At the same time, training should be conducted during which employees learn about both the new opportunities and the risks arising from the use of new applications and tools based on generative artificial intelligence made available on the Internet. Another solution to this problem may be for the company to ban employees entirely from using intelligent chatbots available on the Internet. In such a situation, the company will be forced to create its own internal applications and intelligent chatbots, which are not connected to the Internet and operate solely as integral modules of the company's internal information systems. This type of solution will probably require significant financial outlays on the company's part to create its own IT solutions; the costs can be substantial and may constitute a high financial barrier for many small companies. On the other hand, if the construction of internal IT systems equipped with their own intelligent chatbot solutions becomes an important element of competitive advantage over key direct competitors, the aforementioned expenditures will probably be treated as financial resources allocated to investment and development projects that are important for the company's future.
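One of the safeguards described above, a pre-submission filter that screens prompts before they leave the company, can be sketched as follows; the regular expressions, keywords and the example prompt are illustrative placeholders, and a real data-loss-prevention layer would be far broader and policy-driven.

import re

# A minimal sketch of a pre-submission filter that redacts obviously
# sensitive patterns before a prompt is sent to an external chatbot.
# The patterns and keywords below are illustrative placeholders; a real
# data-loss-prevention tool would use much broader rules and human policy.

PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban":    re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\d{4}){4,7}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}
BLOCK_KEYWORDS = {"confidential", "strategic plan", "trade secret"}

def sanitize_prompt(text: str) -> str:
    lowered = text.lower()
    if any(keyword in lowered for keyword in BLOCK_KEYWORDS):
        raise ValueError("Prompt blocked: contains material marked as sensitive.")
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(sanitize_prompt(
    "Contact j.kowalski@example.com about invoice PL61 1090 1014 0000 0712 1981 2874"
))

Filters of this kind only reduce the most obvious leaks; they do not replace training, contractual clauses or the internal chatbot solutions discussed above.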
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to reduce the risk of leakage of sensitive data of companies, enterprises and institutions, which employees of these entities previously input into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to facilitate their work?
How do you mitigate the risk of leakage of sensitive data of companies, enterprises and institutions that employees of these entities have previously entered into ChatGPT?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can artificial intelligence help improve sentiment analysis of changes in Internet user awareness conducted using Big Data Analytics as relevant additional market research conducted on large amounts of data and information extracted from the pages of many online social media users?
In recent years, before launching new product and service offerings, more and more companies and enterprises have been commissioning specialized marketing research firms, as part of their market research, to conduct sentiment analyses of changes in public sentiment, in awareness of the company's brand, in recognition of its mission and in awareness of its offerings. This kind of sentiment analysis is carried out on computerized Big Data Analytics platforms, where a multi-criteria analytical process is run on a large set of data and information taken from multiple websites. Among the source websites from which the data is taken, news portals dominate, publishing news and journalistic articles on a specific issue, including on the company, enterprise or institution commissioning the study. In addition, key online data sources include online forums and social media pages where Internet users discuss various topics, including the product and service offerings of companies, enterprises, and financial or public institutions. With the growing scale of e-commerce, including the sale of various products and services through online stores and shopping portals, and the growing importance of online advertising campaigns and promotional actions, the importance of such analyses of Internet users' sentiment on specific topics is also growing, as they play a role complementary to more traditional market research. A key problem for this type of sentiment analysis is the rapidly growing volume of data and information contained in entries, comments, posts, banners and advertising spots published on social media, as well as the constantly emerging new social media. This problem is partly solved by increasing computing power and multi-criteria processing of large amounts of data thanks to ever-improving microprocessors and Big Data Analytics platforms. In addition, the possibilities of advanced multi-criteria processing of large data and information sets in ever shorter timeframes may increase significantly when generative artificial intelligence technology is involved in the processing.
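A minimal sketch of the sentiment-scoring step itself, here using the lexicon-based VADER analyzer from NLTK on two invented posts; in a Big Data Analytics pipeline the same scoring would be applied to millions of entries and the results aggregated by brand, topic and time.

# A minimal sentiment-scoring sketch; VADER is a lexicon-based model suited
# to short social-media text in English. A production pipeline would feed
# millions of posts from a Big Data platform and aggregate the scores.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-off lexicon download
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Love the new product line, great value!",
    "Terrible customer service, never buying again.",
]
for post in posts:
    scores = analyzer.polarity_scores(post)
    print(f"{scores['compound']:+.2f}  {post}")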
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
The use of Big Data Analytics platforms of ICT information technologies in sentiment analysis for selected issues related to Industry 4.0
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can artificial intelligence help improve sentiment analysis of changes in Internet users' awareness conducted using Big Data Analytics as relevant additional market research conducted on a large amount of data and information extracted from the pages of many online social media users?
Can artificial intelligence help improve sentiment analysis conducted on large data sets and information on Big Data Analytics platforms?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Some believe that the time has come to connect two (or more) brains together to transfer information from one brain to another, much as we do routinely by transferring files between computers. This idea was piloted by hooking up two rats such that unit recordings from the neocortex of one rat performing an operant task were used to trigger neural responses, by way of electrical stimulation, in the neocortex of a second rat in order to affect that rat's operant behavior (Pais-Vieira, Nicolelis et al. 2013). After assessing the information transferred, it was found that less than 0.02 bits per second were communicated between the rats, which explains why the effects observed were barely significant (Tehovnik and Teixeira e Silva 2014). Indeed, the experiments of Pais-Vieira, Nicolelis et al. were inferior to those used by neuroscientists to activate the brain electrically to enhance or perturb the operant behavior of animals (Tehovnik 2024). Furthermore, an information transfer rate of under 0.02 bits per second is many orders of magnitude below that needed to perform language at a rate of 40 bits per second, which can be done by word of mouth (Tehovnik and Chen 2015), a type of brain-to-brain transfer with which we are all familiar.
A shortcoming of hooking up two or more brains, i.e., neocortices, to transfer information is that this type of transfer bypasses the body, i.e., the sensors and the muscles (Tehovnik and Chen 2015), even though the aim is to store the information in the activated brain to bring about a registration of new learning by the transfer. By bypassing the inputs and outputs of the body, the amount of control over a subject's behavior is greatly diminished, as evidenced by the failure of neocortically localized brain-machine interfaces transferring under 3 bits per second (or under 8 possibilities per second) versus peripherally localized interfaces such as the cochlear implant transferring up to 10 bits per second, or 1,024 possibilities per second (Tehovnik et al. 2013; Tehovnik, Hasanbegović, Chen 2024; Tehovnik and Teixeira e Silva 2014). The high transfer rate of the cochlear implant explains why it has been successful in restoring language function to the hearing impaired. More than half a million patients worldwide have been fitted with a cochlear implant (NIH Statistics 2019).
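For readers who want the arithmetic behind these figures, the conversion from an information rate to the number of alternatives distinguishable per second is simply 2 raised to the number of bits, as the short sketch below restates for the rates quoted in this passage.

# Restating the arithmetic used above: an information rate of b bits per
# second distinguishes 2**b alternatives per second.
for bits in (0.02, 3, 10, 40):
    print(f"{bits:>5} bits/s -> about {2 ** bits:,.2f} alternatives per second")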
To what extent do artificial intelligence technology, Big Data Analytics, Business intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized through Internet marketing, within the framework of social media advertising campaigns?
Among the areas in which applications based on generative artificial intelligence are now rapidly finding application are marketing communication processes realized within the framework of Internet marketing, within the framework of social media advertising campaigns. More and more advertising agencies are using generative artificial intelligence technology to create images, graphics, animations and videos that are used in advertising campaigns. Thanks to the use of generative artificial intelligence technology, the creation of such key elements of marketing communication materials has become much simpler and cheaper and their creation time has been significantly reduced. On the other hand, thanks to the applications already available on the Internet based on generative artificial intelligence technology that enable the creation of photos, graphics, animations and videos, it is no longer only advertising agencies employing professional cartoonists, graphic designers, screenwriters and filmmakers that can create professional marketing materials and advertising campaigns. Thanks to the aforementioned applications available on the Internet, graphic design platforms, including free smartphone apps offered by technology companies, advertising spots and entire advertising campaigns can be designed, created and executed by Internet users, including online social media users, who have not previously been involved in the creation of graphics, banners, posters, animations and advertising videos. Thus, opportunities are already emerging for Internet users who maintain their social media profiles to professionally create promotional materials and advertising campaigns. On the other hand, generative artificial intelligence technology can be used unethically within the framework of generating disinformation, informational factoids and deepfakes. The significance of this problem, including the growing disinformation on the Internet, has grown rapidly in recent years. The deepfake image processing technique involves combining images of human faces using artificial intelligence techniques.
In order to reduce the scale of disinformation spreading on the Internet media, it is necessary to create a universal system for labeling photos, graphics, animations and videos created using generative artificial intelligence technology. On the other hand, a key factor facilitating the development of this kind of problem of generating disinformation is that many legal issues related to the technology have not yet been regulated. Therefore, it is also necessary to refine legal norms on copyright issues, intellectual property protection that take into account the creation of works that have been created using generative artificial intelligence technology. Besides, social media companies should constantly improve tools for detecting and removing graphic and/or video materials created using deepfake technology.
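As a minimal, admittedly weak illustration of the labelling idea, the sketch below embeds an 'AI-generated' note in a PNG file's metadata using Pillow; the file name and generator name are placeholders, and robust labelling would have to rely on signed, tamper-resistant provenance standards rather than plain text chunks that can easily be stripped.

# A minimal sketch of labelling an AI-generated image by embedding a
# provenance note in PNG metadata. File names are placeholders; robust
# labelling would rely on signed content-provenance standards rather than
# plain, easily-stripped text chunks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-image-model")    # placeholder name

img = Image.open("generated_banner.png")                 # placeholder path
img.save("generated_banner_labeled.png", pnginfo=metadata)

# Later, a platform-side check can read the tag back:
print(Image.open("generated_banner_labeled.png").text.get("ai_generated"))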
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent do artificial intelligence technology, Big Data Analytics, Business Intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized within the framework of Internet marketing and social media advertising campaigns?
How do artificial intelligence technology and other Industry 4.0/5.0 technologies support Internet marketing processes?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
My answer: Yes; in order to interpret history, disincentives are the most rigorous guide. How? Because of the many assumptions of inductive logic, deductive logic is more rigorous. Throughout history, incentives are less rigorous because no entity (besides God) is completely rational and/or self-interested, so what incentivizes an act is less rigorous than what disincentivizes the same action. And, as a heuristic, all entities (besides God) have a finite existence before their energy (eternal consciousness) goes to the afterlife (paraphrased from these sources: 1) 2) ), and thus interpretation through disincentives is more rigorous than interpretation through incentives.
Over 50% of human neocortex is devoted to three main sensory modalities—vision, audition, and somatosensation/proprioception—which are topographic senses (Sereno et al. 2022); olfaction and taste have a lesser representation and they are largely non-topographic (Kandel et al. 2013). Thus, even though the neocortex contains 16 billion neurons with 1.6 x 10^14 synapses (Herculano-Houzel 2009; Tehovnik, Hasanbegović, Chen 2024), at least half of these fibres are involved in the transmission of information, along with its eventual storage at a final destination. Neurons of the parietal, temporal, and fronto-orbital cortices house object information as conveyed by the senses (Brecht and Freiwald 2012; Bruce et al. 1981; Kimura 1993; Ojemann 1991; Penfield and Roberts 1966; Rolls 2004; Schwarzlose et al. 2005). The neurons in these areas are devoid of a topography, which is an attribute of the retrosplenial, lateral intraparietal, infratemporal, and orbital cortices all of which are association areas (Sereno et al. 2022). These areas are important for the integration of information before it is sent to the cerebellum (for further storage and efference-copy updating) and to the motor nuclei for task execution (Schiller and Tehovnik 2015; Tehovnik, Hasanbegović, Chen 2024; Tehovnik, Patel, Tolias et al. 2021).
If the first station for a given sense is ablated in the neocortex of humans, then all ability to work with that sense is lost as it pertains to consciousness (Tehovnik, Hasanbegović, Chen 2024). For sensory information to be stored in the neocortex, the primary sensory areas must be intact; subcortical channels cannot replace this function. For example, when V1 is damaged in human subjects they experience blindsight by utilizing residual subcortical pathways through the superior colliculus, pretectum, and lateral geniculate nucleus to transfer information to extrastriate cortex. Under such conditions human and non-human subjects, including rodents, respond only to high-contrast punctate targets or high-contrast barriers, such that a human subject will declare that they are unaware of the visual stimuli, namely, they are only aware of their blindness (Tehovnik, Hasanbegović, Chen 2024; Tehovnik, Patel, Tolias et al. 2021). The same occurs for the other senses, but less work has been done in this regard; somatosensation has been investigated and confirmed to exhibit properties akin to 'blindsight' when S1 and S2 are damaged in human subjects (pers. comm., Jeffry M. Yau, Baylor College of Medicine, 2021).
Based on the recent fMRI work of Vigotsky et al. (2022), consciousness is stored in the association/non-topographic areas of neocortex, such that lesions of just the association areas would be expected to abolish all consciousness, which is normally supported by a continuous flow of declarative information by way of the hippocampus (Corkin 2002). Furthermore, it is the association areas that have priority access to the cerebellum (as verified with resting-state fMRI, as reviewed in Tehovnik, Patel, Tolias et al. 2021) for the long-term storage of consciousness after being converted into executable code so that motor routines can be evoked at the shortest latencies after being triggered by minimal signaling by the neocortex, which we believe is what happens in the generation of express saccades and other automated behaviors (Tehovnik, Hasanbegović, Chen 2024).
**The foregoing is an excerpt from a book we (Tehovnik, Hasanbegović, Chen 2024) are writing entitled ‘Automaticity, Consciousness, and the Transfer of Information’ which explores the relationship of the neocortex and cerebellum from fishes to mammals using Shannon’s information theory. It is notable that at the level of the cerebellum, a similar efference-copy mechanism is operative across vertebrates, so that the transition from consciousness to automaticity can be achieved using a common circuit with a long evolutionary history**
How to build a Big Data Analytics system based on artificial intelligence that is more advanced than ChatGPT and learns only real, verified information and data?
How to build a Big Data Analytics system that analyses information taken from the Internet, an analytics system based on artificial intelligence conducting real-time analytics, integrated with an Internet search engine, but more advanced than ChatGPT, which will improve data verification through discussion with Internet users and will learn only real, verified information and data?
Well, ChatGPT is not perfect at self-learning new content and refining the answers it gives, because it can give confirming answers even when the question formulated by the Internet user contains information or data that is not factually correct. In this way, in the course of such 'discussions', ChatGPT can learn not only new but also false information and fictitious data. Currently, various technology companies are planning to create, develop and implement computerized analytical systems based on artificial intelligence technology similar to ChatGPT, which will find application in various fields of big data analytics, in business and research work, and in business entities and institutions operating in different sectors and industries of the economy. One of the directions of development of this kind of artificial intelligence technology is the plan to build a system for analysing large data sets, a Big Data Analytics system analysing information taken from the Internet, an analytical system based on artificial intelligence conducting real-time analytics and integrated with an Internet search engine, but more advanced than ChatGPT, which will improve data verification through discussion with Internet users and will learn only real, verified information and data. Some technology companies are already working on creating this kind of technological solution and application of artificial intelligence similar to ChatGPT. Presumably, many technology start-ups that plan to create, develop and implement business-specific technological innovations based on a particular generation of artificial intelligence technology similar to ChatGPT are also considering research in this area, and perhaps developing a start-up based on a business concept in which Industry 4.0 technological innovation, including the aforementioned artificial intelligence technologies, is a key determinant.
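One small piece of such a system can be sketched concretely: a gatekeeper that admits documents into the learning/retrieval corpus only if they come from a curated list of trusted sources. The domains, the minimum length and the accept_document helper below are all hypothetical illustrations, not a complete verification mechanism.

# A minimal sketch of one safeguard the question points at: only ingest
# documents whose source domain is on a curated whitelist before they are
# added to the training/retrieval corpus. Domains below are placeholders.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"stat.gov.pl", "europa.eu", "who.int"}   # example whitelist

def accept_document(url: str, text: str, min_length: int = 200) -> bool:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return domain in TRUSTED_DOMAINS and len(text) >= min_length

corpus = []
candidate = ("https://www.stat.gov.pl/report", "example text " * 50)
if accept_document(*candidate):
    corpus.append(candidate)

print(f"Documents accepted: {len(corpus)}")

Source whitelisting alone does not guarantee truth, of course; it would have to be combined with fact-checking, cross-source corroboration and human curation of the kind discussed above.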
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build a Big Data Analytics system analysing information taken from the Internet, an analytical system based on artificial intelligence conducting real-time analytics, integrated with an Internet search engine, but more advanced than ChatGPT, which will improve data verification through discussion with Internet users and will learn only real, verified information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
The scale of disinformation on the Internet, including fake news, has been growing in recent years, mainly in social media. Disinformation develops mainly on social media sites popular among young people, children and teenagers. The growing scale of disinformation is particularly socially damaging in view of the objectives pursued by cybercriminals and certain organisations that use, for example, the technique of publishing posts and banners containing fake news from fake profiles of fictitious Internet users. The aim is to influence public opinion, to shape the general social awareness of citizens, to influence the assessment of specific policies of governments, national and/or international organisations, public and other institutions, to influence the ratings, credibility, reputation and recognition of specific institutions, companies, enterprises, their product and service offerings or individuals, and to influence the results of parliamentary, presidential and other elections. In addition, the scale of cybercriminal activity and the improvement of cyber security techniques have also been growing in parallel on the Internet in recent years. Therefore, as part of improving techniques to reduce the scale of disinformation deliberately spread by specific national and/or international organisations, computerised platforms are being built to detect fake news and other forms of disinformation on the Internet using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies. Since cybercriminals and organisations generating disinformation use new Industry 4.0 technologies to create fake profiles on popular social networks, new Industry 4.0 information technologies, including but not limited to Big Data Analytics, artificial intelligence, deep learning and machine learning, should also be used to reduce the scale of such activities harmful to citizens.
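A hedged, high-level sketch of one possible architecture is given below: ingested items are scored by a (stubbed) classifier and anything above a threshold is routed to a human-review queue; the Item and ReviewQueue classes and the scoring stub are illustrative assumptions standing in for trained Big Data / AI components.

# A high-level sketch of the pipeline such a platform could follow:
# ingest -> classify -> route suspicious items to human review. The scoring
# function is a stub standing in for a trained Big Data / AI model.
from dataclasses import dataclass, field

@dataclass
class Item:
    source: str
    text: str
    score: float = 0.0            # estimated probability of disinformation

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)
    def add(self, item: Item) -> None:
        self.items.append(item)

def score_item(item: Item) -> float:
    # Stub: a deployed system would call a trained classifier here.
    suspicious = ("shocking truth", "they don't want you to know")
    return 0.9 if any(s in item.text.lower() for s in suspicious) else 0.1

def process(stream, queue: ReviewQueue, threshold: float = 0.5):
    for item in stream:
        item.score = score_item(item)
        if item.score >= threshold:
            queue.add(item)        # flagged for human moderators

queue = ReviewQueue()
process([Item("social-post", "Shocking truth they don't want you to know!")], queue)
print(len(queue.items), "item(s) flagged for review")

Keeping a human-review stage in the loop is a deliberate design choice here, since fully automated takedowns raise their own freedom-of-speech concerns.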
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the architecture of an effective computerised platform for detecting factoids and other forms of disinformation on the Internet built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies be designed?
And what do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
WHAT IS INFORMATION? WHAT IS ITS CAUSAL (OR NON-CAUSAL?) CORE? A Discussion. Raphael Neelamkavil, Ph.D. (Quantum Causality), Dr. phil. (Gravitational Coalescence Cosmology)
Questions Addressed: What is information? Is it the same as the energy or matter-energy that is basic to it? Is it merely what is being communicated via energy and different from the energy? If it is different, is it causally or non-causally different or a-causally? Is it something purely physical, if it is based on and/or identifiable to energy? What is the symbolic nature of information? How does information get symbolized? Does it have a causal basis and core? If yes, how to systematize it? Can the symbolic aspect of information be systematized? Is information merely the symbolic core being transmitted via energy? If so, how to connect systematically and systemically the causal core and the symbolic core of languages? If language is a symbolizing production based on consciousness and life – both human and other – and if the symbolic aspect may be termed the a-causal but formatively causal core or even periphery of it, can language possess a non-causal aspect-core or merely a causal and an a-causal aspect-cores? If any of these is the case, what are the founding aspects of language and information within consciousness and life? These are the direct questions involved in the present work. I shall address these and the following more general but directly related questions together in the proposed work.
From a general viewpoint, the causal question engenders a multitude of other associated paradoxical questions at the theoretical foundations of the sciences. What are the foundations of all sciences and philosophy together, upon which the concepts of information, language, consciousness which is the origin of language, and the very existent matter-energy processes are based? Are there commonalities between information, language, consciousness, and existent matter-energy processes? Could a grounding of information, language, etc. be helped if their common conceptual base on To Be can be unearthed, and their consciousness-and-life-related and matter-energy-related aspects may be discovered? How to connect them to the causal (or non-causal?) core of all matter-energy? These are questions more foundational than the former set.
Addressing and resolving the foundational question of the apriority of Causality is, in my opinion, the possibly most fundamental solution. Hence, addressing these is the first task. This should be done in such a manner that the rest should follow axiomatically and thus naturally. Hence, the causal question is to be formulated and then the possible ways of reflection of the same in mental concepts that may axiomatically be demonstrated to follow suit. This task appears to be over-ambitious. But I would attempt to demonstrate as rationally as possible that the connections are strongly based on the very implications of To Be. As regards language, I deal only with verbal, nominal, and attributive (adverbs and adjectives) words, because (1) including other parts of speech would go beyond more than double the number of pages and (2) these other parts of speech are much more complicated and hence may be thought through and integrated in the mainline theory here, say, in the course of another decade or more!
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also develop search engines as part of their range of Internet information services are working on technological solutions to implement ChatGPT-type artificial intelligence into these search engines. Currently, there are discussions and considerations about the social and ethical implications of such a potential combination of these technologies and of offering this solution in open access on the Internet. The considerations relate to the possible level of risk of manipulation of the information message in the new media, potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of a planned shaping of public opinion, etc. This raises a further issue for consideration concerning the legitimacy of creating a control institution that would carry out ongoing monitoring of the objectivity, independence and ethics of the algorithms used in technological solutions implementing ChatGPT-type artificial intelligence into Internet search engines, including the search engines that top the rankings of the online tools Internet users rely on for increasingly precise and efficient searches for specific information. If such a system of institutional control on the part of the state is not established, or if a control system involving the companies developing these technological solutions does not function effectively and/or does not keep up with technological progress, there may be serious negative consequences in the form of an increase in the scale of disinformation in the new Internet media. How important this may become in the future is evident from what is currently happening around the social media portal TikTok. On the one hand, it has been the fastest growing new social medium in recent months, with more than 1 billion users worldwide; on the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities. It cannot be ruled out that new types of social media will emerge in the future in which the above-mentioned solutions implementing ChatGPT-type artificial intelligence into online search engines will find application: search engines operated by Internet users on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria searches controlled by the Internet user for precisely specified information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies, institutions, etc. on social media sites and/or on web-based publication-indexing sites and knowledge bases.
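A minimal sketch of the retrieval step that such a combination would rest on is shown below: the most relevant indexed documents are selected for a query and placed in the prompt so that the model's answer can point to identifiable sources; the documents are invented and the final call to a language model is deliberately left out, since the relevant APIs differ between providers.

# A minimal sketch of the retrieval step behind combining a search engine
# with a ChatGPT-type model: pick the most relevant indexed documents and
# place them in the prompt, so answers can cite identifiable sources.
# The documents are placeholders; no real search index or model API is used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "TikTok restrictions introduced for devices used in public institutions.",
    "New EU rules on transparency of recommendation algorithms.",
    "Guide to baking sourdough bread at home.",
]
query = "regulation of social media recommendation algorithms"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

prompt = "Answer using only the sources below and cite them:\n" + "\n".join(
    f"[{i + 1}] {doc}" for i, doc in enumerate(top_docs)
) + f"\n\nQuestion: {query}"
print(prompt)   # this prompt would then be passed to the language model

Grounding answers in retrievable, citable sources in this way is one of the main safeguards discussed against the risk that algorithmically generated answers shape search results in an unverifiable manner.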
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning, artificial intelligence ... what's next? Intelligent, thinking, autonomous robots?
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning and artificial intelligence. Machine learning, machine self-learning and machine learning systems are synonymous terms relating to the field of artificial intelligence, with a particular focus on algorithms that can improve themselves automatically through experience gained from exposure to large data sets. Algorithms operating within the framework of machine learning build a mathematical model of data processing from sample data, called a training set, in order to make predictions or decisions without being explicitly programmed by a human to do so. Machine learning algorithms are used in a wide variety of applications, such as spam protection, i.e. filtering Internet messages for unwanted correspondence, or image recognition, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks. Deep learning is a subcategory of machine learning which involves the creation of deep neural networks, i.e. networks with multiple levels of neurons. Deep learning techniques are designed to improve, among other things, automatic speech processing, image recognition and natural language processing. The structure of deep neural networks consists of multiple layers of artificial neurons. Simple neural networks can be designed manually, so that a specific layer detects specific features and performs specific data processing, while learning consists of setting appropriate weights, significance levels and value systems for the components of specific issues defined on the basis of processing and learning from large amounts of data. In large neural networks, the deep learning process is automated and, to a certain extent, self-contained. In this situation, the network is not designed to detect specific features, but detects them on the basis of the processing of appropriately labelled data sets. Both such data sets and the operation of the neural networks themselves should be prepared by specialists, but the features are detected by the programme itself. Therefore, large amounts of data can be processed and the network can automatically learn higher-level feature representations, which means it can detect complex patterns in the input data. In view of the above, deep learning systems are built on Big Data Analytics platforms designed so that the deep learning process is performed on a sufficiently large amount of data. Artificial intelligence, denoted by the acronym AI, is, in turn, the 'intelligent', multi-criteria, advanced, automated processing of complex, large amounts of data, carried out in a way that alludes to certain characteristics of human intelligence exhibited by thought processes. As such, it is the intelligence exhibited by artificial devices, including certain advanced ICT and Industry 4.0 information technology systems and the devices equipped with these technological solutions. The concept of artificial intelligence is contrasted with that of natural intelligence, i.e. the intelligence of humans. Artificial intelligence thus has two basic meanings: on the one hand, it is a hypothetical intelligence realised through a technical rather than a natural process.
On the other hand, it is the name of a technology and a research field at the intersection of computer science and cognitive science, which also draws on the achievements of psychology, neurology, mathematics and philosophy. In computer science and cognitive science, artificial intelligence refers to the creation of models and programs that simulate at least partially intelligent behaviour. Artificial intelligence is also considered in philosophy, within which a theory of the philosophy of artificial intelligence is being developed, and it is a subject of interest in the social sciences.
The main task of research and development work on artificial intelligence and its new applications is the construction of machines and computer programs capable of performing selected functions analogously to the human mind working with the human senses, including processes that do not lend themselves to numerical algorithmisation. Such problems are sometimes referred to as AI-hard and include, among others, decision-making in the absence of complete data, analysis and synthesis of natural language, logical (rational) reasoning, automatic theorem proving, computer logic games such as chess, intelligent robots, and expert and diagnostic systems. Artificial intelligence can be developed and improved by integrating it with machine learning, fuzzy logic, computer vision, evolutionary computing, neural networks, robotics and artificial life.
Artificial intelligence technologies have been developing rapidly in recent years, driven by their combination with other Industry 4.0 technologies, the use of microprocessors, digital machines and computing devices with ever-increasing capacity for multi-criteria processing of ever-larger amounts of data, and the emergence of new fields of application. Recently, the development of artificial intelligence has become a topic of discussion in various media because of ChatGPT, an open-access, automated, AI-based solution with which Internet users can hold a kind of conversation. The solution is based on, and learns from, a large collection of data extracted in 2021 from specific data and information resources on the Internet. The development of artificial intelligence applications is so rapid that it is outpacing the adaptation of regulations. The new applications being developed do not always generate exclusively positive impacts; potential negative effects include the generation of disinformation on the Internet, i.e. information crafted with artificial intelligence that is not in line with the facts and is disseminated on social media sites. This raises a number of questions regarding the development of artificial intelligence and its new applications, the possibilities that will arise under future generations of artificial intelligence, and the possibility of teaching artificial intelligence to think, i.e. to realise artificial thought processes in a manner analogous or similar to those of the human mind.
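To make the machine-learning workflow described above more concrete, here is a minimal, illustrative Python sketch of the spam-filtering example mentioned earlier, using the scikit-learn library: a classifier is fitted to a small labelled learning set and then makes predictions on messages it has never seen. The toy messages and labels are hypothetical and serve only to illustrate the idea; a real spam filter would be trained on a much larger data set.
```python
# Minimal sketch of supervised machine learning for spam filtering:
# a model is fitted to a labelled "learning set" and then makes
# predictions without being explicitly programmed with filtering rules.
# The toy messages below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click this link",
    "Cheap loans approved instantly, reply today",
    "Meeting moved to 10 am, see agenda attached",
    "Please review the draft report before Friday",
]
labels = ["spam", "spam", "ham", "ham"]  # the learning set

# The pipeline turns raw text into TF-IDF features and fits a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# The fitted model generalises to messages it has never seen.
print(model.predict(["Claim your free prize today"]))   # likely 'spam'
print(model.predict(["Agenda for Friday's meeting"]))   # likely 'ham'
```
The same pattern, scaled up to deep neural networks and much larger labelled data sets, is what the deep-learning systems described above automate on Big Data platforms.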
In view of the above, I address the following question to the esteemed community of scientists and researchers:
The fourth technological revolution currently taking place is characterised by rapidly advancing ICT and Industry 4.0 technologies, including but not limited to machine learning, deep learning and artificial intelligence... What's next? Intelligent, thinking, autonomous robots?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining the technologies of quantum computers, Big Data Analytics applied to data and information extracted from, for example, large numbers of websites and social media sites, cloud computing, satellite analytics and artificial intelligence in joint applications for the construction of integrated analytical platforms, is it possible to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thus significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
Ongoing technological progress is increasing the technical possibilities for conducting research, for collecting and assembling large amounts of research data, and for their multi-criteria processing using ICT and Industry 4.0 technologies. Before the development of ICT, IT tools, personal computers, etc. in the second half of the 20th century, as part of the third technological revolution, computerised, semi-automated processing of large data sets was very difficult or impossible. As a result, building multi-criteria, multi-faceted models of complex macro-process structures from large sets of data and information, as well as simulation and forecasting models, was limited or practically impossible. However, the technological advances made during the current fourth technological revolution and the development of Industry 4.0 technologies have changed a great deal in this regard. The current fourth technological revolution is, among other things, a revolution in the improvement of multi-criteria, computerised analytical techniques based on large data sets. Industry 4.0 technologies, including Big Data Analytics, are used in the multi-criteria processing and analysis of large data sets, and artificial intelligence (AI) can be useful for scaling up the automation of research processes and the multi-faceted processing of big data obtained from research.
The technological advances taking place are contributing to the improvement of computerised analytical techniques applied to increasingly large data sets. Applying the technologies of the fourth technological revolution, including ICT and Industry 4.0 technologies, to multi-criteria analyses and to simulation and forecasting models built on large sets of information and data increases the efficiency of research and analytical processes. Increasingly, research conducted within different scientific disciplines and fields of knowledge relies on computerised analytical tools, including Big Data Analytics in conjunction with other Industry 4.0 technologies.
When these analytical tools are augmented with Internet of Things technology, cloud computing and satellite-based sensing and monitoring techniques, opportunities arise for real-time, multi-criteria analysis of large areas, e.g. of nature and climate, conducted using satellite technology. When machine learning, deep learning, artificial intelligence, multi-criteria simulation models and digital twins are added to these analytical and research techniques, it becomes possible to create predictive simulations of multi-factor, complex macro-processes in real time. The complex, multi-faceted macro-processes whose study is facilitated by the application of new ICT and Industry 4.0 technologies include, on the one hand, multi-factor natural, climatic and ecological processes caused or affected by civilisational factors: changes in the state of the environment, environmental pollution, changes in the state of ecosystems and biodiversity, changes in the state of soils in agricultural fields, changes in moisture levels in forested areas, environmental monitoring, deforestation, and so on. On the other hand, they include economic, social and financial processes in the context of the functioning of entire economies, economic regions, continents, or the global economy.
Year on year, thanks to technological advances in ICT, including new generations of microprocessors with ever-increasing computing power, the possibilities for increasingly efficient, multi-criteria processing of large collections of data and information are growing. Artificial intelligence can be particularly useful for the selective and precise retrieval of specific, defined types of information and data from many selected types of websites, and for the real-time transfer and processing of these data in database systems organised in the computing cloud on Big Data Analytics platforms, accessed by a system that manages and updates a model of a specific macro-process built with digital twin technology. In addition, the use of supercomputers, including quantum computers with particularly large computational capacities for processing very large data sets, can significantly increase the scale of data and information processed within multi-criteria analyses of ongoing natural, climatic, geological, social and economic macro-processes and in the creation of simulation models concerning them.
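As a purely illustrative sketch of the kind of predictive modelling referred to above, the short Python example below fits a simple regression model to synthetic macro-indicators and produces a forecast for a hypothetical scenario. The variable names, values and relationships are invented for illustration only; an actual integrated platform of the kind described would combine far richer data sources, models and infrastructure.
```python
# Minimal sketch of a predictive model fitted to aggregated macro-process
# indicators. The data are synthetic and the variable names hypothetical;
# this only illustrates the idea of multi-criteria predictive analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120  # e.g. 120 monthly observations

emissions = rng.normal(50, 5, n)        # hypothetical emissions index
energy_use = rng.normal(100, 10, n)     # hypothetical energy-use index
temperature_anomaly = (0.02 * emissions + 0.01 * energy_use
                       + rng.normal(0, 0.3, n))  # synthetic target

X = np.column_stack([emissions, energy_use])
model = LinearRegression().fit(X, temperature_anomaly)

# Forecast the anomaly for a hypothetical future scenario.
scenario = np.array([[55.0, 110.0]])
print("predicted anomaly:", model.predict(scenario)[0])
```
In the architecture discussed above, such a model would be only one component, fed continuously by data ingested from websites, sensors and satellite sources into a Big Data platform.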
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is it possible, by combining the technologies of quantum computers, Big Data Analytics applied to data and information extracted from, among other sources, a large number of websites and social media portals, cloud computing, satellite analytics and artificial intelligence in joint applications for the construction of integrated analytical platforms, to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thereby significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining the technologies of quantum computers, Big Data Analytics and artificial intelligence, is it possible to improve the analysis of macro-processes?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Dariusz Prokopowicz
In your opinion, will the addition of mandatory sustainability reporting according to the European Sustainability Reporting Standards (ESRS) to company and corporate reporting motivate business entities to scale up their sustainability goals?
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of sustainability goals and accelerate the processes of transforming the economy towards a sustainable, green circular economy?
Taking into account the negative aspects of the unsustainable development of the economy, including the over-consumption of natural resources, the increasing scale of environmental pollution, still-high greenhouse gas emissions, the progressing process of global warming and the intensifying negative effects of climate change, it is necessary to accelerate the pro-environmental and pro-climate transformation of the classic, brown, linear economy of excess into a sustainable, green, zero-carbon, closed-loop economy. One of the key determinants of this green transformation is the implementation of the Sustainable Development Goals, i.e. the 17 UN Sustainable Development Goals. In recent years, many companies and enterprises, noticing the growing importance of this issue, including the increasing pro-environmental and pro-climate awareness of citizens, i.e. the customers for their offers, have added the implementation of the Sustainable Development Goals to their missions and development strategies and present themselves and their product and service offers in advertising campaigns and other forms of marketing communication as green, environmentally and climate friendly, and as pursuing specific sustainability goals. Unfortunately, this is not always consistent with the facts. Research shows that in the European Union the majority of companies and enterprises already carry out this type of marketing communication to a greater or lesser extent. However, a significant proportion of businesses that present themselves as green and pursuing specific sustainability goals, and that present their product and service offers as green, made exclusively from natural raw materials and produced fully in line with sustainability goals, do so unreliably and mislead potential customers. Many companies and businesses engage in greenwashing. It is therefore necessary to improve the systems for verifying, against the facts, what economic operators present about themselves and their offers in their marketing communications. By significantly reducing the scale of greenwashing used by many companies, it will be possible to increase the effectiveness of the green transformation of the economy and genuinely increase the scale of achievement of the Sustainable Development Goals. Significant instruments for motivating business operators to conduct marketing communications reliably include extending the scope of their reporting to cover sustainability issues. The addition of sustainability reporting obligations for companies and businesses in line with the European Sustainability Reporting Standards (ESRS) should motivate economic actors to scale up their implementation of the Sustainable Development Goals. In November 2022, the Council of the European Union gave its final approval to the Corporate Sustainability Reporting Directive (CSRD). The Directive requires companies to report on sustainability in accordance with the European Sustainability Reporting Standards (ESRS). This means that, under the Directive, more than 3,500 companies in Poland will have to disclose sustainability data.
The ESRS standards developed by EFRAG (the European Financial Reporting Advisory Group) have been submitted to the European Commission, and we are currently waiting for their final form as delegated acts. However, this does not mean that companies should not already be looking at the new obligations, especially if they have not reported on sustainability issues so far, or have done so only to a limited extent. Companies will have to disclose sustainability issues in accordance with the ESRS standards. It is therefore essential to build systemic reporting standards for business entities enriched with sustainability issues. If the addition of sustainability reporting obligations in accordance with the European Sustainability Reporting Standards (ESRS) to company and corporate reporting is carried out effectively, business entities should face a stronger incentive to scale up their sustainability goals. In this respect, the introduction of enhanced disclosure of sustainability issues should help to increase the scale of implementation of the Sustainable Development Goals and accelerate the transformation of the economy towards a sustainable, green circular economy.
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of the Sustainable Development Goals and accelerate the processes of transformation of the economy towards a sustainable, green circular economy?
In your opinion, will the addition of mandatory sustainability reporting to companies and businesses in line with the European Sustainability Reporting Standards (ESRS) motivate business entities to scale up the implementation of the Sustainable Development Goals?
Will the extension of sustainability reporting by business entities motivate companies to scale up their sustainability goals?
What challenges do companies and businesses face in relation to the obligation for expanded disclosure of sustainability issues?
What do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your opinions, on getting to know your personal views, and on an honest approach to discussing scientific issues rather than ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Best wishes,
Dariusz Prokopowicz
A fundamental question for artificial intelligence (AI) and informatics scientists: Are information and artificial and biological intelligence non-causal, i.e. not based on energy?
I am now finalizing a book on this theme. It is theoretically very fundamental to AI and biological intelligence (BI).
I am developing a system of thought that yields Universal Causality in all sciences, and also in AI and BI.
I invite your ideas. I have already uploaded a short document on this to my RG page. Kindly read it and comment here.
The book is expected to appear sometime after December 2023 in English and Italian, and then in Spanish. I will keep you informed.
What, in your opinion, are the negative effects of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
A recent survey shows that only 60 per cent of the public in Poland knows what inflation is, including the awareness that a drop in inflation from a high level means that prices are still rising, only more slowly. In Poland, in February 2023, the government-controlled Central Statistical Office reported consumer inflation of 18.4 per cent. Since March, disinflation has been under way: in April 2023 the Central Statistical Office reported consumer inflation of 14.7 per cent. The most optimistic forecasts of the central bank cooperating informally with the government, i.e. the National Bank of Poland, suggest that falling inflation in Poland may reach single-digit levels only in December. After deducting international factors, i.e. the prices of energy raw materials, energy and foodstuffs, core inflation, i.e. the part determined by internal factors in Poland, still stands at around 12 per cent.
The drop in inflation since March has been largely determined by a reduction in the until recently excessively high margins and prices of motor fuels by the government-controlled, monopolistically operating, state-owned gas and fuel concern, which accounts for over 90 per cent of domestic production and sales of motor fuels. These reductions are the result of criticism in the independent media that this government-controlled concern was acting anti-socially, making excessive profits by maintaining increased margins and not reducing the price of motor fuels until early 2023, despite the fact that the prices of energy raw materials, including oil and natural gas, had already fallen to their levels from before the war in Ukraine. Citizens can only find out what is really happening in the economy from media independent of the government. Consequently, in the government-controlled mainstream media, including the government-controlled so-called public television, other media, including the independent media, are constantly criticised and harassed.
But back to the issue of the public's economic knowledge. Among the media in Poland, it is those independent of the PIS government that play an important role in increasing economic awareness and knowledge, including the objective presentation of events in the economy and explanations of economic processes that are objective and consistent with the fundamentals of economics. The aforementioned research shows that as many as 40 per cent of citizens in Poland still do not know what inflation is and do not fully understand what a successive decrease in inflation consists of. Some of these 40 per cent assume that a fall in inflation, even from a high level, i.e. the disinflation currently taking place, means that the prices of purchased products and services are falling. The level of economic knowledge is therefore still low, and various dishonest economic actors and institutions take advantage of this. The low level of economic knowledge among the public has often been exploited by para-financial companies which, in their advertising campaigns and by presenting themselves as banks, created financial pyramids that took money from the public for unreliable deposits. Many citizens lost their life savings in this way. In Poland this was the case when the authorities overseeing the financial system inadequately informed citizens about the high risk of losing money deposited with such para-banking and pseudo-investment companies as Kasa Grobelnego and Amber Gold.
In addition, the low level of economic knowledge in society makes it easier for unreliable political options to find support among a significant proportion of citizens for populist pseudo-economic policy programmes and, on that basis, to win parliamentary elections and then conduct economic policy in a way that leads to financial or economic crises after a few years. It is therefore necessary to develop a system of economic education from primary school onwards, but also in the so-called Universities of the Third Age, which are attended mainly by senior citizens. This is important because it is seniors who are most exposed to unreliable, misleading publicity campaigns run by companies laundering money. Owing to the low level of economic knowledge, the government in Poland, through the controlled mainstream media, persuades a significant part of the population to support what is in reality an anti-social, anti-environmental, anti-climate, financially unsustainable pseudo-economic policy, which leads to high indebtedness of the state financial system and to the perpetuation of financial and economic crises.
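The misconception about disinflation described above can be illustrated with a two-step compounding calculation; the basket value and the treatment of the two inflation readings as consecutive periods are purely illustrative.
```python
# Disinflation illustrated: when inflation falls from 18.4% to 14.7%,
# prices keep rising, only more slowly. The basket value is illustrative,
# and the two readings are treated here as consecutive periods.
basket = 100.0
basket *= 1.184   # a period with 18.4% inflation -> 118.40
basket *= 1.147   # a later period with 14.7% inflation -> ~135.80
print(round(basket, 2))  # prices rose further despite "falling" inflation
```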
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What, in your opinion, are the negative consequences of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
What are the negative consequences of the low level of economic knowledge of the public?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your opinions, on getting to know your personal views, and on an honest approach to discussing scientific issues rather than ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Warm regards,
Dariusz Prokopowicz
Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow, that we need something, that we should perhaps buy something we think we need?
Can an AI-equipped internet robot, using the results of research carried out by advanced Big Data socio-economic analytics systems and employed in the call centre department of a company or institution, already forecast in real time the consumption and purchase needs of a specific internet user on the basis of a conversation with a potential customer and, on this basis, offer internet users products or services that they themselves would probably only realise they need a moment later?
On the basis of analytics of a bank customer's purchases of products and services, of online payments and settlements, and of bank card payments, will banks refine their models of customers' purchase preferences for specific banking products and financial services? For example, will the purchase of a certain type of product or service result in an offer of, say, specific insurance or a bank loan to a specific customer of the bank?
Will this be an important part of the automation of the processes carried out within the computerised systems concerning customer relations etc. in the context of the development of banking in the years to come?
For years, in databases, data warehouses and Big Data platforms, Internet technology companies have been collecting information on citizens, Internet users, customers using their online information services.
Continuous technological progress increases the possibilities of obtaining, collecting and processing data on citizens in their role as potential customers and consumers of Internet and other media offers, Internet information services, offers of various types of products and services, and advertising campaigns, which also influence the general social awareness of citizens and the choices people make concerning various aspects of their lives. The new Industry 4.0 technologies currently being developed, including Big Data Analytics, cloud computing, the Internet of Things, Blockchain, cyber security, digital twins, augmented reality, virtual reality, as well as machine learning, deep learning, neural networks and artificial intelligence, will determine rapid technological progress and the development of applications of these technologies in online marketing in the years to come.
The robots being developed, which collect information on specific content from various websites and web pages, are able to pinpoint information written by Internet users on their social media profiles. In this way it is possible to obtain a large amount of information describing a specific Internet user, to build a highly accurate characterisation of that user, and to create multi-faceted characteristics of customer segments for specific product and service offers. In this way, digital avatars of individual Internet users are built in the Big Data databases of Internet technology companies, large e-commerce platforms and social media portals. The descriptive characteristics of such avatars are so detailed and contain so much information about Internet users that most of the people concerned do not even know how much information these companies, platforms and portals hold about them.
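As a rough, hypothetical sketch of how such collected behavioural attributes could feed a purchase-propensity model, consider the Python example below. All field names, records and labels are invented; a real "digital avatar" would contain far more, and far more sensitive, attributes.
```python
# Minimal sketch of a purchase-propensity model built from behavioural
# attributes of the kind described above. All field names and records
# are hypothetical toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: [pages_viewed, minutes_on_site, past_purchases, ad_clicks]
X = np.array([
    [3, 2, 0, 0],
    [25, 40, 2, 5],
    [8, 10, 0, 1],
    [40, 65, 5, 9],
    [2, 1, 0, 0],
    [30, 50, 3, 6],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = bought after the campaign

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Estimate the purchase probability for a new, unseen visitor profile.
new_visitor = np.array([[20, 35, 1, 4]])
print("purchase probability:", model.predict_proba(new_visitor)[0][1])
```
Scaled up to millions of profiles and hundreds of attributes, this is the kind of scoring that allows offers to be targeted at the segments described above.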
Geolocation, combined with 5G high-speed broadband and ICT and Industry 4.0 technologies, has on the one hand made it possible to develop analytics that identify Internet users' shopping preferences, topics of interest, etc., depending on where they are geographically at any given moment with the smartphone on which they use particular online information services. On the other hand, the combination of these technologies in applications installed on smartphones has increased both the scale of data collection on Internet users and the efficiency with which these data are processed and used in the marketing activities of companies and institutions, with such operations increasingly performed in real time in the computing cloud and their results presented on Internet of Things devices.
It is becoming increasingly common to experience situations in which, while walking with a smartphone past a physical shop, bank, company or institution offering certain services, we receive an SMS, a banner or a message on the Internet portal we have just used on our smartphone, informing us of a new promotional offer of products or services from the very shop, company or institution we have just passed.
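At its core, the location-triggered offer scenario just described reduces to a geofencing check: comparing the device's reported coordinates against stored shop locations and radii. The sketch below illustrates this with the standard haversine distance formula; the shops, coordinates and offers are hypothetical.
```python
# Minimal sketch of the geofencing logic behind location-triggered offers:
# when the device's position falls within a shop's radius, a promotional
# message is selected. Coordinates, shops and offers are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

shops = [
    {"name": "Bank branch", "lat": 52.2300, "lon": 21.0110, "radius_m": 150,
     "offer": "New savings account promotion"},
    {"name": "Electronics shop", "lat": 52.2400, "lon": 21.0200, "radius_m": 100,
     "offer": "10% off headphones today"},
]

def offers_near(lat, lon):
    return [s["offer"] for s in shops
            if haversine_m(lat, lon, s["lat"], s["lon"]) <= s["radius_m"]]

print(offers_near(52.2301, 21.0112))  # walking past the bank branch
```
In a production setting such checks would typically run server-side against large location streams and be combined with the preference profiles discussed above.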
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
Is analytics based on Big Data and artificial intelligence, conducted in the field of market research, market analysis, the creation of characteristics of target customer segments, already able to forecast what we will think about tomorrow, that we need something, that we might need to buy something that we consider necessary?
Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow?
The text above is my own, written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems such as ChatGPT.
Copyright by Dariusz Prokopowicz
What do you think about this topic?
What is your opinion on this subject?
Please answer,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
How can the implementation of artificial intelligence help in the automated sentiment analysis of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and in the analysis of changes in opinion on specific topics and of trends in general social awareness, conducted using computerised Big Data Analytics platforms?
How can the computerised analytics system architecture of Big Data Analytics platforms used to analyse the sentiment of Internet users' social media activity be improved using the new technologies of Industry 4.0, including but not limited to artificial intelligence, deep learning, machine learning, etc.?
In recent years, analytics conducted on large data sets downloaded from multiple websites using Big Data Analytics platforms has been developing rapidly. This type of analysis includes sentiment analyses of changes in Internet users' opinions on specific topics and issues, on product and service offers, company brands, public figures, political parties, etc., based on the verification of thousands of posts, comments and answers given in discussions on social media sites. With the ever-increasing computing power of new generations of microprocessors and the speed of processing data stored on ever-larger digital storage media, the importance of increasing the scale of automation of these sentiment analyses is growing. Certain new Industry 4.0 technologies, including machine learning, deep learning and artificial intelligence, can help with this. I am conducting research on the sentiment analysis of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and on the analysis of changes in opinion on specific topics and of trends in general social awareness, conducted using computerised Big Data Analytics platforms. I have included the results of these studies in my articles on this subject, which I have posted after publication on my profile on this ResearchGate portal. I invite you to join me in scientific cooperation on this issue.
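To give a concrete, if highly simplified, illustration of what the automated part of such sentiment analysis can look like, the Python sketch below scores a few hypothetical posts against a tiny sentiment lexicon and aggregates the labels. Production platforms instead apply trained language models and far larger, curated lexicons to millions of posts.
```python
# Minimal sketch of lexicon-based sentiment scoring of social-media posts,
# of the kind a Big Data Analytics platform might automate at scale.
# The posts and word lists are hypothetical toy data.
from collections import Counter

POSITIVE = {"great", "good", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "useless"}

posts = [
    "I love this brand, great service and helpful staff",
    "Terrible experience, the product is useless",
    "The update is good but the app is still bad on older phones",
]

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Aggregating such labels over time yields the opinion-trend curves
# discussed above.
print(Counter(sentiment(p) for p in posts))
```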
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the implementation of artificial intelligence help in the automated sentiment analysis of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and in the analysis of changes in opinion on specific topics and of trends in general social awareness, conducted using computerised Big Data Analytics platforms?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
Please answer with reasons,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
How can artificial intelligence such as ChatGPT, together with Big Data Analytics, be used to analyse the level of innovativeness of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, environmental innovations, energy innovations and other types of innovation?
The economic development of a country is determined by a number of factors, which include the level of innovativeness of economic processes, the creation of new technological solutions in research and development centres, research institutes, university laboratories and business entities, and their implementation in the economic processes of companies and enterprises. In the modern economy, the level of innovativeness of the economy is also shaped by the effectiveness of innovation policy, which influences the formation of innovative startups and their effective development. The economic activity of innovative startups involves high investment risk, and for the institutions financing their development it generates high credit risk. As a result, many banks do not finance business ventures led by innovative startups. As part of systemic programmes financing the development of startups from national public funds or international innovation support funds, financial grants are organised which can be provided as non-refundable financial assistance if a startup successfully develops certain business ventures according to the original plan entered in the application for external funding. Non-refundable grant programmes can thus activate the development of innovative business ventures in specific areas, sectors and industries of the economy, including, for example, innovative green business ventures that pursue the Sustainable Development Goals and are part of the green economy transformation.
Institutions distributing non-refundable financial grants should constantly improve their systems for analysing the level of innovativeness of the business ventures that startups describe as innovative in their funding applications. In improving systems for verifying the level of innovativeness of business ventures and the fulfilment of specific goals, e.g. the Sustainable Development Goals or green economy transformation goals, new Industry 4.0 technologies implemented in Business Intelligence analytical platforms can be used, including machine learning, deep learning, artificial intelligence (e.g. ChatGPT), Business Intelligence platforms with Big Data Analytics, cloud computing, multi-criteria simulation models, etc. Given appropriate IT equipment, including computers with new-generation, high-performance processors, it is therefore possible to use artificial intelligence such as ChatGPT, Big Data Analytics and other Industry 4.0 technologies to analyse the level of innovativeness of new economic projects that startups plan to develop, implementing innovative business solutions and technological, ecological, energy and other types of innovation.
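One very simple way to make such a verification system transparent is to keep the scoring rule explicit while letting the AI/Big Data layer estimate the inputs. The sketch below shows a hypothetical multi-criteria weighted score for a funding application; the criteria, weights and example scores are invented for illustration, and in the setup described above a tool such as ChatGPT or a Big Data module would at most propose the per-criterion scores from the application documents.
```python
# Minimal sketch of multi-criteria scoring of startup funding applications.
# Criteria, weights and the example scores are hypothetical; the weighting
# stays explicit, while an AI/Big Data layer could estimate the inputs.
WEIGHTS = {
    "technological_novelty": 0.35,
    "sustainability_alignment": 0.25,
    "market_potential": 0.25,
    "team_track_record": 0.15,
}

def innovation_score(criterion_scores):
    """Weighted average of per-criterion scores on a 0-10 scale."""
    return sum(WEIGHTS[c] * s for c, s in criterion_scores.items())

application = {
    "technological_novelty": 8,
    "sustainability_alignment": 9,
    "market_potential": 6,
    "team_track_record": 7,
}
print(round(innovation_score(application), 2))  # 7.6 out of 10
```
Keeping the weights explicit also makes the grant institution's priorities auditable, which matters when AI-estimated inputs are contested by applicants.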
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence such as ChatGPT, together with Big Data Analytics, be used to analyse the level of innovativeness of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, ecological innovations, energy innovations and other types of innovation?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Does analytics based on sentiment analysis of changes in Internet users' opinions using Big Data Analytics help detect fake news spread as part of the deliberate dissemination of disinformation on social media?
The spread of disinformation on social media, carried out by setting up fake profiles and spreading fake news through these media, is becoming increasingly dangerous for the security not only of specific companies and institutions but also of the state. The various social media, including those dominating this segment of new online media, differ considerably in this respect. The problem is more acute for those social media which are among the most popular and which are used mainly by young people, whose world view can be more easily influenced by fake news and other disinformation techniques used on the Internet. Currently, among children and young people, the most popular social media include TikTok, Instagram and YouTube. Consequently, in recent months the development of some social media such as TikTok has already been restricted by the governments of some countries through bans on installing and using this portal's application on smartphones, laptops and other devices used for official purposes by employees of public institutions. The governments of these countries justify these actions by the need to maintain a certain level of cyber security and to reduce the risk of surveillance and of theft of sensitive, strategic and security-critical data and information of individual institutions, companies and the state. In addition, there have already been more than a few cases of data leaks at other social media portals, telecoms, public institutions, local authorities and others, based on hacking into the databases of specific institutions and companies.
In Poland, however, the opposite is true. Not only does the ruling political group PIS not restrict the use of TikTok by employees of public institutions, but it also encourages politicians of the ruling option to use this portal to publish videos as part of the ongoing electoral campaign, in order to increase the chances of winning the parliamentary elections for the third time in autumn 2023.
According to analysts researching the problem of growing disinformation on the Internet, in highly developed countries it is enough to create 100,000 avatars, i.e. non-existent fictitious persons seemingly functioning on the Internet through fake profiles created on social media portals, to seriously influence the world view and general social awareness of Internet users, i.e. usually the majority of citizens in the country. In third-world countries and countries with undemocratic systems of power, about 1,000 such avatars suffice, with stories modelled, for example, on famous people, such as a well-known singer in Poland claiming that there is no pandemic and that vaccines are an instrument for increasing state control over citizens. The analysis of changes in the world view of Internet users, of trends in social opinion on specific issues, of evaluations of specific product and service offers, and of the brand recognition of companies and institutions can be conducted on the basis of sentiment analysis of changes in Internet users' opinions using Big Data Analytics. Consequently, this type of analytics can be applied, and can be of great help, in detecting fake news disseminated as part of the deliberate spread of disinformation on social media.
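Alongside sentiment analytics, a supervised text classifier is one concrete technique that can support the detection of disinformation described above. The Python sketch below trains a toy classifier on a few hypothetical labelled examples; real systems are trained on large, human-verified corpora and combine text features with account-level signals that help identify fake profiles.
```python
# Minimal sketch of a supervised fake-news text classifier that could sit
# alongside sentiment analytics on a Big Data platform. The labelled
# examples are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Vaccines are a secret tool of state control, share before it is deleted",
    "Miracle cure hidden by doctors, no pandemic ever existed",
    "Health ministry publishes weekly report on hospital admissions",
    "Central bank announces inflation figures for the first quarter",
]
labels = ["disinformation", "disinformation", "credible", "credible"]

# TF-IDF features plus a simple probabilistic classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Doctors hide the miracle cure, there is no pandemic"]))
```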
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does analytics based on sentiment analysis of changes in the opinions of Internet users using Big Data Analytics help in detecting fake news spread as part of the deliberate dissemination of disinformation on social media?
What is your opinion on this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
We know that knowledge management transcends or goes beyond information management, but what process do you think should be followed so as not to evaluate them separately?
Do new ICT information technologies facilitate the development of scientific collaboration, the development of science?
Do new ICT information technologies facilitate scientific research, the conduct of research activities?
Do new ICT information technologies, internet technologies and/or Industry 4.0 facilitate research?
If so, to what extent, in which areas of your research has this facilitation occurred?
What examples do you know of from your own research and scientific activity that support the claim that new ICT information technologies facilitate research?
What is your opinion on this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Several leading technology companies are currently working on developing smart glasses that will be able to take over many of the functions currently contained in smartphones.
These will no longer be limited to augmented reality, Street View, interactive connection to Smart City systems or the virtual reality used in online computer games, but will include many other remote communication and information services.
In view of the above, I address the following questions to the esteemed community of researchers and scientists:
Will smart glasses replace smartphones in the next few years?
Or will thin, flexible interactive panels stuck on the hand prove more convenient to use?
What new technological gadget could replace smartphones in the future?
What do you think about this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Greetings,
Dariusz Prokopowicz
Hi everyone,
We'd like to open a huge topic with a systematic literature review. However, the topic is so broad that the initial search on Web of Science provided us with over 25,000 papers meeting our search criteria. (Sure, this can be reduced, but only slightly.)
I'd like to explore the possibilities of computer-assisted review - there must be software capable of performing an analysis of some sort. Does anyone have experience in this field?
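For a first pass, if the Web of Science results can be exported to a spreadsheet, a simple automated screening step can be scripted; the sketch below assumes a hypothetical export file 'wos_export.csv' with 'Title' and 'Abstract' columns and a hypothetical keyword list. Dedicated systematic-review tools such as ASReview or Rayyan go further by using machine-learning prioritisation of records for manual screening.
```python
# Minimal sketch of a first automated screening pass over a Web of Science
# export. The file name 'wos_export.csv', its 'Title'/'Abstract' columns
# and the keyword list are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("wos_export.csv")

# Drop exact duplicate titles, then keep records whose title or abstract
# mentions at least one of the (hypothetical) core keywords.
df = df.drop_duplicates(subset="Title")
keywords = ["information analysis", "knowledge management"]
text = (df["Title"].fillna("") + " " + df["Abstract"].fillna("")).str.lower()
mask = text.apply(lambda t: any(k in t for k in keywords))

df[mask].to_csv("screened_subset.csv", index=False)
print(f"{mask.sum()} of {len(df)} records kept for manual screening")
```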
Thank you for your thoughts.
Best regards,
Martin