Black-box language model explanation by context length probing - Université de Montpellier
Conference paper — Year: 2023


Ondřej Cífka
Antoine Liutkus

Abstract

The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present context length probing, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing differential importance scores to be assigned to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The source code and a demo of the method are available.
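The idea described in the abstract can be sketched in a few lines: score the target token's log-probability under progressively longer contexts, then take successive differences as importance scores. This is a minimal illustration under assumptions, not the authors' released implementation; the `logprob_fn` callable and the toy model below are hypothetical stand-ins for a real causal language model's token-level probabilities.

```python
import math

def context_length_probe(logprob_fn, tokens, target_idx):
    """For each context length c = 1..target_idx, compute the model's
    log-probability of tokens[target_idx] given only the last c tokens
    of context. Returns a list of log-probabilities indexed by c - 1."""
    logprobs = []
    for c in range(1, target_idx + 1):
        context = tokens[target_idx - c:target_idx]
        logprobs.append(logprob_fn(context, tokens[target_idx]))
    return logprobs

def differential_importance(logprobs):
    """The importance of the token at distance c from the target is the
    gain in log-probability obtained by extending the context to include
    it; the scores telescope to the full-context log-probability."""
    return [logprobs[0]] + [b - a for a, b in zip(logprobs, logprobs[1:])]

# Toy stand-in for a causal LM: the target "meow" becomes far more likely
# once the context window is long enough to include the token "cat".
def toy_logprob(context, target):
    return math.log(0.5 if "cat" in context else 0.01)

tokens = ["the", "cat", "sat", "here", "meow"]
lp = context_length_probe(toy_logprob, tokens, target_idx=4)
scores = differential_importance(lp)
# The largest score appears at the context length where "cat" first
# enters the window, flagging it as the important context token.
```

With a real model, `logprob_fn` would run one forward pass per context length and read off the target token's log-probability from the output distribution; the black-box property holds because nothing beyond those probabilities is needed.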
Main file: main.pdf (749.85 KB)
Origin: Files produced by the author(s)
License: Copyright (All rights reserved)

Dates and versions

hal-03917930, version 1 (13-11-2023)


Identifiers

Cite

Ondřej Cífka, Antoine Liutkus. Black-box language model explanation by context length probing. ACL 2023 - 61st Annual Meeting of the Association for Computational Linguistics, Jul 2023, Toronto, Canada. pp.1067--1079, ⟨10.18653/v1/2023.acl-short.92⟩. ⟨hal-03917930⟩

