Applied Intelligence (2025) 55:245
https://doi.org/10.1007/s10489-024-06212-4
BHRAM: a knowledge graph embedding model based on bidirectional
and heterogeneous relational attention mechanism
Chaoqun Zhang1,2 · Wanqiu Li1 · Yuanbin Mo1 · Weidong Tang1 · Haoran Li1 · Zhilin Zeng1
Accepted: 15 December 2024 / Published online: 30 December 2024
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024
Abstract
Knowledge graph embedding (KGE) aims to predict missing relations between entities in a knowledge graph (KG) and has garnered
much attention in recent years because KGs are inherently incomplete. However, existing KGE models have limitations in handling
heterogeneous KGs and in predicting the direction of relations. To address these issues, a novel KGE model called BHRAM is
proposed, based on a bidirectional and heterogeneous relational attention mechanism. Specifically,
BHRAM comprises three primary components, namely entity aggregation, relation aggregation and triplet prediction. The
entity aggregation module divides the adjacency matrix into original and reverse relation adjacency matrices, using graph
convolution to aggregate node features and subsequently form entity embedding representations. The relation aggregation
module leverages bidirectional relations for feature extraction, learns the weights of different relation paths independently, and
generates embedding representations of relation paths through an aggregation function. Finally, the triplet prediction module
utilizes a score function for probabilistic predictions. To validate the superiority of BHRAM, comprehensive experiments
were conducted on four well-known datasets, including baseline comparisons, relation classification tasks and an ablation study.
The results demonstrate that BHRAM significantly outperforms the other baselines on the FB15k-237, Kinship and UMLS
datasets, while achieving performance comparable to or better than the baselines on the WN18RR dataset. These findings indicate
that BHRAM is a robust and effective model for addressing heterogeneity in KGs.
Keywords Attention mechanism · Graph convolutional network · Heterogeneity problem · Knowledge graph embedding
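To make the entity aggregation step described in the abstract more concrete, the snippet below gives a minimal sketch of one possible reading of it: node features are propagated separately over the original and reverse relation adjacency matrices and then combined into entity embeddings. All names (entity_aggregation, A_orig, A_rev, W_orig, W_rev) and the tanh combination are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' code): bidirectional entity aggregation.
# Node features are propagated separately over the original and reverse
# relation adjacency matrices, then combined into entity embeddings.
import numpy as np

def row_normalize(adj):
    """Add self-loops and row-normalize an adjacency matrix."""
    adj = adj + np.eye(adj.shape[0])
    return adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)

def entity_aggregation(A_orig, A_rev, X, W_orig, W_rev):
    """One graph-convolution step over original and reverse edges (illustrative)."""
    H_orig = row_normalize(A_orig) @ X @ W_orig  # messages along original edges
    H_rev = row_normalize(A_rev) @ X @ W_rev     # messages along reversed edges
    return np.tanh(H_orig + H_rev)               # combined entity representations

# Toy example: 4 entities, 8-dimensional input features, 16-dimensional embeddings.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(4, 4)).astype(float)  # original relation adjacency
X = rng.normal(size=(4, 8))
emb = entity_aggregation(A, A.T, X, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(emb.shape)  # (4, 16)
```

Splitting the adjacency matrix in this way lets incoming and outgoing edges receive different weights, which is the property the abstract ties to relation direction prediction.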
1 Introduction
A knowledge graph (KG) is a structured semantic knowledge
base [1], a concept initially proposed by Google in 2012 [2].
KGs have become a crucial component of artificial intelligence (AI),
and many similar KGs have since emerged, such as WordNet
[3], DBpedia [4], YAGO [5], Freebase [6] and Wikidata [7].
These KGs have significantly contributed to various intelligence-driven
applications, including recommender systems
[8], information retrieval [9] and question-answering systems
[10].
Corresponding author: Wanqiu Li (lwq20211225@163.com)

1 College of Artificial Intelligence, Guangxi Minzu University, Nanning 530006, China
2 Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, Nanning 530006, China
A KG represents concepts and their interrelations in the
physical world using a symbolic form. It stores knowledge
information in the form of directed graphs, as depicted in
Fig. 1. KGs typically consist of nodes that represent entities
and edges that represent relations between neighboring entities.
This representation is commonly expressed as a triplet
(h, r, t), where h denotes a head entity, r denotes a relation
and t denotes a tail entity. For example, (Christopher Nolan,
directed, Batman Begins) indicates that Christopher Nolan
directed the film Batman Begins.
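As a small, purely illustrative example of this triplet view, a KG can be stored as a list of (head, relation, tail) tuples; the second triplet below is an extra fact added only to show the pattern and does not come from the paper.

```python
# Illustrative only: facts stored as (head, relation, tail) triplets.
triplets = [
    ("Christopher Nolan", "directed", "Batman Begins"),
    ("Batman Begins", "release_year", "2005"),  # extra fact for illustration
]
for h, r, t in triplets:
    print(f"{h} --{r}--> {t}")
```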
Despite the vast scale of existing KGs, their inherent
incompleteness persists due to the continuous expansion of
real-world knowledge. Knowledge graph embedding (KGE)
has emerged as a promising solution, embedding entities and
relations into continuous vector spaces to preserve structural
information and predict missing facts [11]. The field
of KGE has witnessed the emergence of various methods,
including translational distance-based methods [12], semantic
matching-based methods [13] and neural network-based methods.
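To illustrate the translational distance-based family mentioned here, the sketch below scores a triplet in the style of TransE, where the head embedding translated by the relation embedding should land close to the tail embedding. It is shown only as background for this method family and is not BHRAM's score function.

```python
# TransE-style translational score (background illustration, not BHRAM's score function).
import numpy as np

def transe_score(h, r, t, norm=1):
    """Plausibility of (h, r, t): smaller ||h + r - t|| means a more plausible triplet."""
    return -np.linalg.norm(h + r - t, ord=norm)

dim = 50
rng = np.random.default_rng(1)
h, r, t = rng.normal(size=(3, dim))  # random embeddings for a toy triplet
print(transe_score(h, r, t))
```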