Murat Apishev’s scientific contributions


Publications (1)


Figure 1: The LIBRA benchmark is a set of 21 long-context tasks grouped into four categories based on the complexity of the required skills.
Table: Each model's average fertility.
Table: Model evaluation scores at different context lengths. Model Name gives the name of the model; the columns 4k, 8k, 16k, 32k, 64k, and 128k give evaluation scores averaged over all tasks, and the Overall score averages the results over all lengths. The best score is in bold, the second best is underlined.
Table: Per-task evaluation results. Model Name gives the name of the model; each task's score is averaged over the context lengths. The best score is in bold, the second best is underlined.
Table: Evaluation results of LLaMA-2-32K. Dataset Name gives the name of the dataset; the rows 4k, 8k, 16k, 32k, 64k, and 128k give each dataset's score at the corresponding context length, and the Overall score averages the results over all lengths.
Long Input Benchmark for Russian Analysis
  • Preprint
  • File available

August 2024 · 55 Reads

Igor Churin · Murat Apishev · Maria Tikhonova · [...] · Alena Fenogenova

Recent advancements in Natural Language Processing (NLP) have fostered the development of Large Language Models (LLMs) that can solve an immense variety of tasks. One of the key aspects of their application is their ability to work with long text documents and to process long sequences of tokens. This has created a demand for proper evaluation of long-context understanding. To address this need for the Russian language, we propose LIBRA (Long Input Benchmark for Russian Analysis), which comprises 21 adapted datasets for thoroughly studying LLMs' abilities to understand long texts. The tasks are divided into four complexity groups and allow the evaluation of models across context lengths ranging from 4k up to 128k tokens. We provide the open-source datasets, codebase, and public leaderboard for LIBRA to guide forthcoming research.
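The scoring scheme described in the table captions above reduces to two averages: scores are first averaged over all tasks for each context length, and the Overall score then averages those per-length results. Below is a minimal Python sketch of that aggregation; the data layout, score values, and function names are illustrative assumptions, not LIBRA's actual codebase.

```python
# Minimal sketch of the aggregation described in the table captions above.
# All names and numbers here are illustrative, not the benchmark's real API.
from statistics import mean

# Hypothetical layout: scores[task][context_length] = metric value.
scores = {
    "task_a": {"4k": 0.72, "8k": 0.68, "16k": 0.61},
    "task_b": {"4k": 0.55, "8k": 0.51, "16k": 0.47},
}

def per_length_averages(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average the scores over all tasks, separately for each context length."""
    lengths = next(iter(scores.values())).keys()
    return {
        length: mean(task_scores[length] for task_scores in scores.values())
        for length in lengths
    }

def overall_score(scores: dict[str, dict[str, float]]) -> float:
    """Average the per-length results into a single Overall score."""
    return mean(per_length_averages(scores).values())

print(per_length_averages(scores))   # e.g. {'4k': 0.635, '8k': 0.595, '16k': 0.54}
print(round(overall_score(scores), 3))  # ≈ 0.59
```

Note that this unweighted averaging means every context length contributes equally to the Overall score, matching the captions' description of it as an average over all lengths.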
