July 2024
In the context of continual learning, little attention has been devoted to the problem of developing a layer of "concepts", also known as a "concept bottleneck", to support the discrimination of higher-level task information, especially when concepts are not supervised. Concept bottleneck discovery in an unsupervised setting is thus largely unexplored, and this paper aims to move a step forward in this direction. We consider a neural network that faces a stream of binary tasks, with no further information on the relationships among them, i.e., no supervision at the level of concepts. The learning of the concept bottleneck layer is driven by a triplet-based criterion, instantiated in conjunction with a specifically designed experience replay (concept replay). This novel criterion exploits fuzzy Hamming distances to treat vectors of concept probabilities as fuzzy bitstrings, encouraging different concept activations across different tasks, while also adding a regularization effect that pushes probabilities towards crisp values. Despite the lack of concept supervision, we found that continually learning the streamed tasks in a progressive manner yields inner concepts that are significantly better correlated with the higher-level tasks than those obtained with joint-offline learning. This result is showcased in an extensive experimental evaluation involving different architectures and newly created (and shared) datasets that are also well-suited to support further investigation of continual learning in concept-based models.
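To make the triplet-based criterion concrete, the following is a minimal sketch of how a fuzzy Hamming distance over concept probabilities can drive a margin-based triplet loss. It assumes the common fuzzy-XOR form p(1-q) + (1-p)q for the per-concept distance and a standard triplet formulation; the function names, the margin value, and the exact combination are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def fuzzy_hamming(p, q):
    # Fuzzy XOR per concept: p*(1-q) + (1-p)*q, summed over all concepts.
    # Treats probability vectors as fuzzy bitstrings; for crisp 0/1
    # vectors this reduces to the ordinary Hamming distance.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * (1.0 - q) + (1.0 - p) * q))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Illustrative triplet criterion: pull concept activations from the
    # same task together, push activations from different tasks apart.
    return max(0.0, fuzzy_hamming(anchor, positive)
                    - fuzzy_hamming(anchor, negative) + margin)
```

Note that for two identical probability vectors the fuzzy Hamming distance is 2*sum(p*(1-p)), which is minimized only at crisp 0/1 values; minimizing the anchor-positive term therefore yields a regularization effect of the kind described in the abstract.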