Human memory system.

Source publication

Contexts in source publication

Context 1
... at the same time, short-term memory preserves information and, with rehearsal when necessary, can also deposit it into long-term memory. In Figure 3, the arrows indicate the direction of information flow among the three stores of the model. ...
Context 2
... In Figure 3, rehearsal refers to the psychological process in which an individual repeats previously memorized material aloud in order to consolidate memory. It is an effective way of keeping information in short-term memory, preventing it from being disturbed by irrelevant stimuli and forgotten. ...
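The two contexts above describe the classic three-store memory model (sensory register, short-term store, long-term store) with a rehearsal loop, as depicted in Figure 3. The sketch below is a minimal, hypothetical simulation of that flow in Python; the capacity limit and rehearsal threshold are illustrative assumptions, not values from the source publication.

```python
from collections import deque

# Minimal sketch of the three-store memory model shown in Figure 3.
# Capacity and rehearsal threshold are illustrative assumptions.
SHORT_TERM_CAPACITY = 7          # classic "7 +/- 2" span, assumed here
REHEARSALS_TO_CONSOLIDATE = 3    # rehearsals before transfer to long-term memory

sensory_register = deque()       # brief, high-volume input buffer
short_term = {}                  # item -> rehearsal count
long_term = set()                # consolidated items

def perceive(item, attended=True):
    """New stimuli enter the sensory register; attended items move on."""
    sensory_register.append(item)
    if attended and len(short_term) < SHORT_TERM_CAPACITY:
        short_term[item] = 0     # enters short-term memory un-rehearsed

def rehearse(item):
    """Rehearsal keeps an item in short-term memory and may consolidate it."""
    if item in short_term:
        short_term[item] += 1
        if short_term[item] >= REHEARSALS_TO_CONSOLIDATE:
            long_term.add(item)  # deposited into long-term memory
            del short_term[item]

def distract():
    """Without rehearsal, un-consolidated items are forgotten."""
    short_term.clear()

# Example: rehearsed items survive distraction, un-rehearsed ones do not.
perceive("phone number")
perceive("passing car", attended=False)
for _ in range(3):
    rehearse("phone number")
distract()
print(long_term)    # {'phone number'}
print(short_term)   # {}
```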

Similar publications

Article
Full-text available
A discussion of the conflict between the identity-building and cognitive aims of history teaching, with reference to "Insegnare l'Italia" by E. Galli della Loggia and L. Perla. Published on www.historialudens.it, "Didattica" section
Preprint
Full-text available
Since 2000, neo-mechanism has become the predominant philosophy in cognitive neuroscience. This contribution introduces the main concepts of this philosophy in order to give the reader the essential tools to engage directly with the scientific literature. The complementary aim is to explain what makes neo-mechanis...

Citations

... To enhance the adaptive capability in dynamic scenarios and minimize manual intervention, cognitive learning (CL) has been proposed [47]. Currently, CL is only utilized in wireless communication for MFI and signal-to-noise ratio (SNR) monitoring in simulation, and it has not been applied to fiber optical communication systems [48]. ...
Article
Full-text available
Nonlinear equalization (NLE) is essential for guaranteeing the performance of an optical network (ON). Effective NLE implementation relies on key parameters of the transmission link, including the modulation format (MF) and the launch power. As ONs become more agile, the parameters of fiber optical transmission need to be adaptive and relevant to the routing condition. Therefore, successful NLE implementation relies on the realization of transmission awareness (TA). Although machine learning-enabled optical performance monitoring (OPM) has been extensively investigated in the past few years, current NLE algorithms cannot autonomously perceive transmission parameters. Furthermore, current TA implementation still needs human intervention to guide the NLE. In addition, existing ML-based OPM and NLE cannot be trained autonomously, leaving them unable to adapt to environmental changes and vulnerable to mislabeling. Here, we propose cognitive learning (CL) for TA-guided NLE in agile ONs. We perform an experiment involving 32 Gbaud polarization-division-multiplexed (PDM)-quadrature phase shift keying (QPSK)/16-quadrature amplitude modulation (QAM) transmission over 1500 km of standard single-mode fiber (SSMF) with a variable launch power from 0 to 3 dBm. When a deep neural network (DNN) with amplitude histograms (AHs) as inputs and one-step-per-span learned digital back-propagation (1stps-LDBP) are developed, the CL simultaneously enables both TA and NLE, with the capability of self-learning, mislabeling resistance, and dynamic adaptation. The proof-of-concept experimental results indicate that both the accuracy of TA and the Q-factor of PDM-16QAM can be improved by 34.8% and 0.84 dB, respectively, when the launch power is 3 dBm. Moreover, the accuracy of TA is enhanced by 35.3%, even when the used data has 30% mislabeling. Therefore, the CL framework can be customized to satisfy various NLE implementations, thereby supporting the adaptive transmission of agile ONs.
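As a rough illustration of the DNN-on-amplitude-histogram idea described in the abstract above, the hedged sketch below trains a small fully connected classifier that maps a histogram vector to a modulation-format label. The histogram length, layer sizes, and synthetic signal model are assumptions made for illustration; they are not the experimental setup of the cited work.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_amplitude_histogram(mod_format, bins=64, symbols=20000):
    """Toy stand-in for a measured amplitude histogram (assumption, not real data)."""
    if mod_format == "QPSK":            # one amplitude ring
        radii = np.ones(symbols)
    else:                               # "16QAM": three amplitude rings
        radii = rng.choice([0.45, 1.0, 1.34], size=symbols, p=[0.25, 0.5, 0.25])
    noisy = radii + 0.08 * rng.standard_normal(symbols)
    hist, _ = np.histogram(noisy, bins=bins, range=(0.0, 2.0), density=True)
    return hist

X = np.array([synthetic_amplitude_histogram(m) for m in ["QPSK", "16QAM"] * 300])
y = np.array(["QPSK", "16QAM"] * 300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small fully connected DNN; layer sizes are illustrative assumptions.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("MF identification accuracy:", clf.score(X_test, y_test))
```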
... By examining the confusion matrix, lenders can gain a nuanced understanding of the model's performance, including its sensitivity to detecting default-prone borrowers and specificity in avoiding false positives. This information is crucial for risk management, as it enables lenders to make informed decisions about credit risk assessment and forecasting, ultimately reducing the likelihood of financial losses and improving overall portfolio performance [115]. Figure 4 illustrates a confusion matrix under downsampling, which is a visual tool commonly employed to assess the performance of classification algorithms, particularly in supervised learning tasks. ...
... This approach can be efficient in financial risk assessment, where the minority class of default-prone individuals is often small and may not provide sufficient data for accurate modeling. By oversampling the minority class, the model can be trained on a more balanced dataset, reducing the risk of bias and improving its ability to detect and accurately classify default-prone individuals [115]. ...
Article
Full-text available
This study explores how machine learning can optimize financial risk management for non-profit organizations by evaluating various algorithms aimed at mitigating loan default risks. The findings indicate that ensemble learning models, such as random forest and LightGBM, significantly improve prediction accuracy, thereby enabling non-profits to better manage financial risk. In the context of the 2008 subprime mortgage crisis, which underscored the volatility of financial markets, this research assesses a range of risks—credit, operational, liquidity, and market risks—while exploring both traditional machine learning and advanced ensemble techniques, with a particular focus on stacking fusion to enhance model performance. Emphasizing the importance of privacy and adaptive methods, this study advocates for interdisciplinary approaches to overcome limitations such as stress testing, data analysis rule formulation, and regulatory collaboration. The research underscores machine learning’s crucial role in financial risk control and calls on regulatory authorities to reassess existing frameworks to accommodate evolving risks. Additionally, it highlights the need for accurate data type identification and the potential for machine learning to strengthen financial risk management amid uncertainty, promoting interdisciplinary efforts that address broader issues like environmental sustainability and economic development.
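The two contexts above mention evaluating a credit-default classifier with a confusion matrix after downsampling, and balancing the training data by resampling the minority class. The sketch below shows both steps on synthetic data with scikit-learn; the features, class ratio, and model choice are illustrative assumptions rather than the cited study's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic, imbalanced "default vs. non-default" data (illustrative only).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

maj, mino = X_train[y_train == 0], X_train[y_train == 1]

# Downsampling: shrink the majority class to the minority-class size.
maj_down = resample(maj, n_samples=len(mino), replace=False, random_state=0)
X_down = np.vstack([maj_down, mino])
y_down = np.array([0] * len(maj_down) + [1] * len(mino))

# (Oversampling would instead resample `mino` up to len(maj) with replace=True.)

model = RandomForestClassifier(random_state=0).fit(X_down, y_down)
print(confusion_matrix(y_test, model.predict(X_test)))  # rows: true, cols: predicted
```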
... Cognitive machine learning refers to the combination of machine learning and the brain's cognitive mechanisms, specifically, integrating the achievements of AI. The authors propose three research directions: the emergence of learning, complementary learning systems, and the evolution of learning [16]. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters, which are used to compute the representation in each layer from the representation in the previous layer. ...
Chapter
Full-text available
With the development of image processing systems, human behavior prediction has grown as a research area. Many research outcomes are already available for recognizing human behavior from different features in image sequences. The combination of human and machine learning is very interesting for determining human behavior without much complexity. Nevertheless, there has been little study of combining currently identified behavior data with behavior prediction. The dataset is formed from the authors' own captured images labeled with anger, happiness, sadness, disgust, etc. The TensorFlow framework, in conjunction with a CNN model, makes this system better suited to generating results. Varying the number of training epochs determines the prediction accuracy and performance of the proposed system. In this work, a cognitive neural network is used to make predictions easily and generate better results. Keywords: Human behavior; Cognitive computing; Machine learning; Convolutional neural network
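The chapter above describes a TensorFlow CNN trained on captured images labeled with emotions, and the preceding context summarizes how backpropagation adjusts a network's internal parameters layer by layer. The sketch below is a minimal Keras CNN of that kind; the input size, class list, and layer widths are assumptions, and the random arrays stand in for the authors' own captured dataset.

```python
import numpy as np
import tensorflow as tf

# Illustrative class list; the chapter mentions anger, happy, sad, disgust, etc.
CLASSES = ["anger", "happy", "sad", "disgust", "neutral"]

# Minimal CNN; input size and layer widths are assumptions, not the authors' setup.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])

# fit() runs backpropagation: each epoch updates the internal parameters so that
# every layer's representation is computed from the previous layer's output.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for the authors' captured images.
X_train = np.random.rand(256, 48, 48, 1).astype("float32")
y_train = np.random.randint(0, len(CLASSES), size=256)
model.fit(X_train, y_train, epochs=5, batch_size=32)
```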
... There is increasing interest in applying artificial intelligence in many academic research fields. There are many approaches to artificial intelligence, including machine learning and statistical learning, and applied artificial intelligence research is indeed emerging and increasing (e.g., Chen & Ge, 2019; Shi, 2019; Sirignano & Cont, 2019). ...
... All of these things mean it's possible to quickly and automatically produce models that can analyse bigger, more complex data and deliver faster, more accurate results, even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities or avoiding unknown risks [1]. ...
Conference Paper
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
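The abstract above describes machine learning as automated analytical model building that learns patterns from data with minimal human intervention. As a hedged illustration of that idea, the sketch below builds such a model automatically with scikit-learn using a cross-validated hyper-parameter search; the synthetic dataset and parameter grid are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a large tabular dataset (illustrative assumption).
X, y = make_classification(n_samples=50_000, n_features=30, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Automated model building": hyper-parameters are chosen by cross-validation,
# so the analyst specifies a search space rather than hand-tuning the model.
search = GridSearchCV(HistGradientBoostingClassifier(random_state=0),
                      {"learning_rate": [0.05, 0.1, 0.2]}, cv=3)
search.fit(X_train, y_train)
print("Best params:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```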
... Deep learning is one of the foundations of artificial intelligence (AI), and the current interest in deep learning is due in part to the buzz surrounding AI. Deep learning techniques have improved the ability to classify, recognize, detect and describe; in one word, to understand [1]. For example, deep learning is used to classify images, recognize speech, detect objects and describe content. ...
Conference Paper
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
... A PLC imports many types of hydrogen refueling station data; if a simple linear regression algorithm is used, it will consume a lot of time and negatively impact the model's results. Therefore, the stochastic gradient descent method was adopted, the temperature was the prediction target, and the characteristic data were used for data correlation fitting [9]. ...
... This work analyzed the hydrogenation data for the abovementioned hydrogen refueling station and describes the hydrogenation process, which is beneficial for matching the weight coefficients of the entire dataset and improving the accuracy of the algorithm [8,9]. The hydrogen refueling process for the hydrogen refueling station is shown in Figure 1. ...
Article
Full-text available
Hydrogen energy vehicles are being increasingly widely used. To ensure the safety of hydrogenation stations, research into the detection of hydrogen leaks is required. Offline analysis using machine learning is achieved with Spark SQL and Spark MLlib technology. In this study, to determine the safety status of a hydrogen refueling station, we used multiple algorithm models to perform calculation and analysis: a multi-source data association prediction algorithm, a stochastic gradient descent algorithm, a deep neural network optimization algorithm, and other algorithm models. We successfully analyzed the data, including the potential relationships, internal relationships, and operation laws between the data, to detect the safety statuses of hydrogen refueling stations.
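The contexts above describe replacing simple linear regression with stochastic gradient descent to fit temperature from hydrogen refueling station feature data. A hedged scikit-learn sketch of that idea follows; the feature names (pressure, flow rate, ambient temperature) and the synthetic relation are assumptions, not the station's actual variables.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical station features: [pressure (MPa), flow rate (kg/min), ambient temp (C)].
n = 10_000
X = np.column_stack([
    rng.uniform(35, 90, n),      # pressure
    rng.uniform(0.5, 3.6, n),    # flow rate
    rng.uniform(-10, 35, n),     # ambient temperature
])
# Synthetic target: tank temperature rising with pressure and flow (illustrative relation).
y = 0.4 * X[:, 0] + 6.0 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SGD fits the linear model incrementally, which scales to large PLC data streams.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```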
... All of these things mean it's possible to quickly and automatically produce models that can analyse bigger, more complex data and deliver faster, more accurate results, even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities or avoiding unknown risks [1]. ...
Conference Paper
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
... Use cases of such a solution include sanitizing the handles and knobs of doors in a hospital corridor or a patient's room. In recent work, a human-support robot uses a deep learning framework for such a case, detecting door handles from a given dataset and sanitizing each handle it detects [40,41]. In order to extend this application area, a neural network can also be trained to detect fire in care homes, raise alarms, and report incidents of fire outbreak to a fire department or provide a first emergency response, such as extinguishing a fire. ...
... In this area, a significant challenge that can arise is enabling a robot and its neural networks to detect fire from dissimilar features, such as flame, smoke, and heat. This is a challenge that can possibly be addressed through cognitive machine learning [41], where an intelligent system can also perceive dissimilar objects by inferring their functions. Moreover, wireless sensor networks can be combined with a robot equipped with cameras or a similar set of sensors to exploit the cognitive features provided by AI and machine learning. ...
... A particular application scenario pertaining to robots in assisted-living environments requires an amalgamation of more than just a single research area. It is discernible from the past, present, and future perspectives within this area, that it is a multi-disciplinary field when a particular use-case is considered [1][2][3][4][19][20][21]25,26,29,31,34,[39][40][41]61,62,64,66]. More specifically, complex robotic systems or intelligent robots in the future, which can operate independently in an assisted living environment, could combine knowledge from AI, machine learning, cognitive machine intelligence, sophisticated robotics, embedded systems, IoT and healthcare engineering. ...
Article
Full-text available
From caretaking activities for elderly people to being assistive in healthcare settings, mobile and non-mobile robots have the potential to be highly applicable and serviceable. The ongoing pandemic has shown that human-to-human contact in healthcare institutions and senior homes must be limited. In this scenario, the elderly and immunocompromised individuals must be exclusively protected. Robots are a promising way to overcome this problem in assisted living environments. In addition, the advent of AI and machine learning will pave the way for intelligent robots with cognitive abilities, while enabling them to be more aware of their surroundings. In this paper, we discuss the general perspectives, potential research opportunities, and challenges arising in the area of robots in assisted living environments and present our research work pertaining to certain application scenarios, i.e., robots in rehabilitation and robots in hospital environments and pandemics, which, in turn, exhibits the growing prospects and interdisciplinary nature of the field of robots in assisted living environments.
... Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labelled data set to guide classification and feature extraction from a larger, unlabelled data set [8]. Semi-supervised learning can solve the problem of not having enough labelled data (or not being able to afford to label enough data) to train a supervised learning algorithm. ...
Conference Paper
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, which is the application of advanced deep learning techniques on big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by the AI applications in various decision making domains.
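The context above notes that semi-supervised learning lets a small labelled set guide learning on a larger unlabelled set. The sketch below illustrates this with scikit-learn's SelfTrainingClassifier on synthetic data; masking 90% of the training labels is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only 10% of the training labels are available; -1 marks "unlabelled".
rng = np.random.default_rng(0)
y_partial = y_train.copy()
y_partial[rng.random(len(y_partial)) > 0.10] = -1

# The base classifier is trained on the labelled subset, then its confident
# predictions on unlabelled points are added as pseudo-labels and it is retrained.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X_train, y_partial)
print("Accuracy with 10% labels:", model.score(X_test, y_test))
```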