LLaMA-Annotate: Visualizing Token-Level Confidences for LLMs

Published: 22 Aug 2024 · Last Modified: 30 Sep 2024
Joint European Conference on Machine Learning and Knowledge Discovery in Databases
License: CC BY 4.0
Abstract: LLaMA-Annotate is a tool for visually inspecting the confidences that a large language model assigns to individual tokens, along with the alternative tokens it considered at each position. We provide both a simple, non-interactive command-line interface and a more elaborate web application. Beyond helping to build an intuition about the “thinking” of the LLM, our tool can be used for context-aware spellchecking, or to see how a different prompt or a differently trained LLM changes the interpretation of a piece of text. The tool can be tried online at https://huggingface.co/spaces/s-t-j/llama-annotate.
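The core computation behind such a visualization can be sketched in a few lines. The snippet below is a minimal illustration, not the tool's actual implementation: it scores each token of a text under a Hugging Face causal LM and lists the top-3 alternatives per position. The model name is only an example; any causal LM would work.

```python
# Minimal sketch: per-token confidences and top-k alternatives from a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # example model, not necessarily the one used by the tool
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# The logit at position i predicts token i+1, so shift by one.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]

for pos, tok_id in enumerate(targets.tolist()):
    conf = log_probs[pos, tok_id].exp().item()  # probability of the actual token
    top = torch.topk(log_probs[pos], k=3)       # alternatives the model considered
    alts = [(tokenizer.decode([int(i)]), p.exp().item())
            for p, i in zip(top.values, top.indices)]
    print(f"{tokenizer.decode([tok_id])!r}: p={conf:.3f}, alternatives={alts}")
```

A visualization layer would then map each probability to a color (e.g., low-confidence tokens highlighted) and expose the alternatives on hover, which is the kind of rendering the command-line and web front-ends provide.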