Figure 7. Performance of various LLMs on a subset of tasks from the SuperGLUE benchmark in zero- and one-shot prompt-based settings.

Source publication (preprint)
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technolog...

Context in source publication

Context 1
... mGPT (Shliazhko et al., 2022), GPT-style models trained on 60 languages from Wikipedia and Common Crawl. Figure 7 shows zero- and one-shot performance on SuperGLUE. In both settings, performance on the entailment tasks (BoolQ and CB) is well above random chance for BLOOM, T0, OPT, and GPT-J. ...
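In this evaluation style, zero-shot means the model receives only an instruction and the test instance, while one-shot prepends a single solved demonstration before the test instance. Below is a minimal sketch of how such prompts can be assembled for a BoolQ-style yes/no task; the templates and example instances are illustrative assumptions, not the exact prompts used in the evaluation.

```python
# Sketch of zero- vs. one-shot prompt construction for a BoolQ-style
# yes/no task. Templates are illustrative assumptions, not the exact
# prompts used in the paper's SuperGLUE evaluation.

def zero_shot_prompt(passage: str, question: str) -> str:
    """Zero-shot: the model sees only the test instance, no demonstrations."""
    return f"{passage}\nQuestion: {question}\nAnswer (yes or no):"

def one_shot_prompt(demo: tuple[str, str, str],
                    passage: str, question: str) -> str:
    """One-shot: a single solved demonstration precedes the test instance."""
    demo_passage, demo_question, demo_answer = demo
    return (
        f"{demo_passage}\nQuestion: {demo_question}\n"
        f"Answer (yes or no): {demo_answer}\n\n"
        f"{passage}\nQuestion: {question}\nAnswer (yes or no):"
    )

# Hypothetical demonstration and test instance.
demo = ("Water boils at 100 degrees Celsius at sea level.",
        "Does water boil at 100 degrees Celsius at sea level?",
        "yes")
print(zero_shot_prompt("BLOOM is a multilingual language model.",
                       "Is BLOOM multilingual?"))
print(one_shot_prompt(demo,
                      "BLOOM is a multilingual language model.",
                      "Is BLOOM multilingual?"))
```

Because a yes/no task has a random-chance baseline of roughly 50%, accuracies well above that level indicate the model is actually using the prompt rather than guessing.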
