A Comparative Analysis of Conversational Large Language Models in Knowledge-Based Text Generation

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission
Abstract: Generating natural language text from graph-structured data is essential for conversational information seeking. Semantic triples derived from knowledge graphs can serve as a valuable source for grounding responses from conversational agents by providing a factual basis for the information they communicate. This is especially relevant in the context of large language models, which offer great potential for conversational interaction but are prone to hallucinating, omitting, or producing conflicting information. In this study, we conduct an empirical analysis of conversational large language models in generating human-readable text from semantic triples. We compare four large language models of varying sizes with different prompting techniques. Through a series of benchmark experiments, we analyze the models' performance and identify the most common issues in the generated predictions. Our findings demonstrate that the capabilities of large language models in triple verbalization can be significantly improved through few-shot prompting, efficient fine-tuning, and post-processing techniques, particularly for smaller models that exhibit lower zero-shot performance.
Paper Type: short
Research Area: Generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.