March 2025 · International Journal of Semantic Computing
Large language models (LLMs) are currently the state of the art among pre-trained language models. LLMs have been applied to many tasks, including question answering over Knowledge Graphs (KGs) and text-to-SPARQL, that is, the translation of Natural Language questions to SPARQL queries. With this motivation, this paper first describes preliminary experiments to evaluate the ability of ChatGPT to answer Natural Language questions over KGs. Based on these experiments, the paper introduces Auto-KGQA, an autonomous, domain-independent framework based on LLMs for text-to-SPARQL. The framework selects fragments of the KG, which the LLM uses to translate the user’s Natural Language question into a SPARQL query over the KG. The paper describes preliminary experiments with Auto-KGQA using ChatGPT, indicating that the framework substantially reduces the number of tokens passed to ChatGPT without sacrificing performance. Finally, the paper includes an evaluation of Auto-KGQA on a publicly available benchmark, which showed that the framework is competitive, achieving an improvement of 13.2% in accuracy over the state of the art and a reduction of 51.12% in the number of tokens passed to the LLM.
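The following is a minimal sketch of the text-to-SPARQL loop the abstract describes: select relevant KG fragments, prompt the LLM with them, and execute the generated query. It is illustrative only; `select_kg_fragments` and `call_llm` are hypothetical stand-ins for the framework's fragment-selection component and the LLM API, whose actual interfaces the abstract does not specify.

```python
"""Illustrative Auto-KGQA-style pipeline sketch (not the authors' implementation)."""
from SPARQLWrapper import SPARQLWrapper, JSON


def select_kg_fragments(question: str, max_triples: int = 200) -> list[str]:
    """Hypothetical fragment selector: return the KG triples most relevant
    to the question, so the prompt stays far smaller than the whole KG."""
    raise NotImplementedError  # placeholder for the framework's selector


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API such as ChatGPT."""
    raise NotImplementedError  # placeholder for the LLM call


def answer_question(question: str, endpoint_url: str) -> dict:
    # 1. Select only the KG fragments relevant to the question,
    #    reducing the number of tokens passed to the LLM.
    fragments = select_kg_fragments(question)

    # 2. Ask the LLM to translate the question into SPARQL,
    #    grounded in the selected fragments.
    prompt = (
        "Relevant KG triples:\n" + "\n".join(fragments)
        + f"\n\nTranslate this question into a SPARQL query: {question}"
    )
    sparql_query = call_llm(prompt)

    # 3. Execute the generated query against the KG's SPARQL endpoint.
    client = SPARQLWrapper(endpoint_url)
    client.setQuery(sparql_query)
    client.setReturnFormat(JSON)
    return client.query().convert()
```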