Figure 1 - uploaded by Ricardo Aragon
System Architecture. A feed-forward network (Module B) is stacked on top of the GAT layer (Module A). The embedding of user u's scores h_i^{(l+1)}, and ...

Source publication
Conference Paper
Full-text available
Recent studies in the context of machine learning have shown the effectiveness of deep attentional mechanisms for identifying important communities and relationships within a given input network. These studies can be effectively applied in those contexts where capturing specific dependencies, while downloading useless content, is essential to take...

Contexts in source publication

Context 1
... attentional mechanism described in this paper was applied as a "base" layer (Module A) for the stacked architecture reported in Fig. 1. Two outputs are provided: h^{(l+1)} ...
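The base layer referenced here appears to be a standard graph attention (GAT) layer. A minimal single-head sketch in PyTorch, with illustrative dimensions and a dummy adjacency matrix not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer (Module A), sketched after the
    standard GAT formulation; sizes here are illustrative only."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention vector

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        z = self.W(h)                                     # (N, out_dim)
        N = z.size(0)
        zi = z.unsqueeze(1).expand(N, N, -1)              # row-wise copies
        zj = z.unsqueeze(0).expand(N, N, -1)              # column-wise copies
        # Attention scores e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))        # restrict to neighbours
        alpha = torch.softmax(e, dim=-1)                  # attention coefficients
        return alpha @ z                                  # h^{(l+1)}

# Dummy 4-node graph with self-loops (needed so softmax rows are defined)
h = torch.randn(4, 8)
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float)
out = GATLayer(8, 16)(h, adj)
```

Each output row is a neighbourhood-weighted combination of projected features, which is what produces the h^{(l+1)} embeddings fed into Module B.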
Context 2
... then passed and combined through feed-forward levels (FFL) in order to obtain, using a final sigmoid-based activation, the score predicted for the user/item (input) pair (i, j). The whole model is trained with MSE loss and the SGD (stochastic gradient descent) optimizer. In particular, the following general architecture (Fig. 1a) was stacked on top of the attention ...
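The stacked scoring head described above can be sketched as follows; layer sizes, names, and the dummy data are assumptions for illustration, while the concatenation, sigmoid output, MSE loss, and SGD optimizer come from the source text:

```python
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    """Feed-forward levels (Module B): combine user and item embeddings
    via concatenation and predict a score in [0, 1]."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.ffl = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),          # final sigmoid-based activation
        )

    def forward(self, h_i, h_j):
        # Concatenation operator in the stacked layer
        return self.ffl(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)

model = ScoringHead(emb_dim=16)
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD, as in the paper
loss_fn = nn.MSELoss()                              # MSE loss, as in the paper

h_u = torch.randn(8, 16)   # user embeddings from the GAT layer (dummy)
h_v = torch.randn(8, 16)   # item embeddings (dummy)
y = torch.rand(8)          # target scores

pred = model(h_u, h_v)     # predicted scores for the (i, j) pairs
loss = loss_fn(pred, y)
opt.zero_grad()
loss.backward()
opt.step()
```

The sigmoid keeps every predicted score in [0, 1], so the MSE target scores are assumed to be normalized to the same range.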
Context 3
... Stacked layer (Module B in Fig. ...
Context 4
... The model described in this paper was implemented using the PyTorch library (https://pytorch.org/), and then executed with different parameters for early stopping and learning rate on Colab (https://colab.research.google.com/). In this work in progress, the attention-based model with the concatenation operator in the stacked layer (see Fig. 1) was compared with the following alternative models. Performance was averaged over the number of folds (10 ...

Similar publications

Preprint
Full-text available
Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence. Meanwhile, adaptive methods have attracted rising attention of optimization and machine learning communities, both for the leverage of life-long information and for the profou...

Citations

... In this article, we present the first stages of our ongoing research project, aimed at significantly empowering the RS of our educational platform "WhoTeach" [29] by means of an explainable attention model (XAM). Specifically, we report our current positioning in the state of the art with the proposed model, which extends the social engine of "WhoTeach" with a graph attentional mechanism aimed at providing social recommendations for the design of new didactic programs and courses. ...
... Here we report a short review of the numerical experiments described in [29]. ...
Chapter
Learning and training processes are starting to be affected by the diffusion of Artificial Intelligence (AI) techniques and methods. AI can be exploited in various ways to support education, although deep learning (DL) models in particular normally suffer from some degree of opacity and lack of interpretability. Explainable AI (XAI) aims at creating a set of new AI techniques whose outputs and decisions come with greater transparency and interpretability. In the educational field it can be particularly significant and challenging to understand the reasons behind a model's outcomes, especially when it comes to suggestions to create, manage or evaluate courses or didactic resources. Deep attentional mechanisms have proved particularly effective at identifying relevant communities and relationships in a given input network, which can be exploited to provide useful information for interpreting the suggested decision process. In this paper we present the first stages of our ongoing research project, aimed at significantly empowering the recommender system of the educational platform "WhoTeach" by means of explainability, to help teachers and experts create and manage high-quality courses for personalized learning.