Exploring LLMs for Personal Knowledge Graph Population from Conversation

Anonymous

17 Apr 2023 · ACL ARR 2023 April Blind Submission
Abstract: Although large language models (LLMs) have made significant advances, they still lack the ability to personalize their responses. Manually supplying personal information to an LLM is tedious and may never be complete. Since conversations contain a wealth of personal information, we propose to extract personal information from conversation and use it to populate a personal knowledge graph (PKG). We explored both finetuning and prompting LLMs, but found that they still struggle to generate the desired PKGs. Our analysis shows that GPT-3.5 cannot generate knowledge triples with the desired relations and that T5 often fails to identify the correct subject. Furthermore, GPT-3.5 struggles to extract in-context subjects, recognize negation expressions, and differentiate between questions and statements. By highlighting these limitations, we aim to inspire future research on PKG population from conversation and on the development of personalized dialogue systems.
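The task described above, extracting knowledge triples from conversational utterances, can be illustrated with a minimal sketch. Note that the triple format, relation names (e.g. `allergic_to`), and parsing convention below are hypothetical examples chosen for illustration, not the schema or method used in the paper:

```python
# Illustrative sketch only: the triple format, relation name, and parser
# below are hypothetical, not the paper's actual schema or pipeline.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str


def parse_triples(llm_output: str) -> list[Triple]:
    """Parse lines like '(user, allergic_to, peanuts)' into Triple objects.

    A model prompted to emit one triple per line could be post-processed
    this way; malformed lines are skipped rather than raising.
    """
    triples = []
    for line in llm_output.splitlines():
        line = line.strip().strip("()")
        parts = [p.strip() for p in line.split(",")]
        if len(parts) == 3 and all(parts):
            triples.append(Triple(*parts))
    return triples


# Example: for the utterance "I'm allergic to peanuts, but my sister isn't.",
# a correct extraction yields the user's fact and keeps the negated one out:
mock_model_output = "(user, allergic_to, peanuts)"
print(parse_triples(mock_model_output))
```

The example also hints at the failure modes the abstract reports: a model must pick the right subject (the speaker, not the sister), honor the negation, and emit relations from the desired inventory for the parsed triples to be usable.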
Paper Type: short
Research Area: Information Extraction