Revision History for XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution

Author Coreference Edit by Jinghui Chen

  • 13 Nov 2024, 15:09 Coordinated Universal Time

    Edit Info

    • Content – Author Index: 3
    • Content – Author Id: Jinghui Chen
    Readers: Everyone
    Writers: DBLP
    Signatures: Jinghui Chen

Author Coreference Edit by Bochuan Cao

  • 07 Oct 2024, 03:37 Coordinated Universal Time
  • Authorids: { replace: { index: 1, value: Bochuan Cao } }

    Edit Info

    • Content – Author Index: 1
    • Content – Author Id: Bochuan Cao
    Readers: Everyone
    Writers: DBLP
    Signatures: Bochuan Cao

Edit by DBLP Uploader

  • 01 Aug 2024, 16:56 Coordinated Universal Time
  • Title: XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution
  • Bibtex:
    @article{DBLP:journals/corr/abs-2405-20404,
      publtype={informal},
      author={Yurui Chang and Bochuan Cao and Yujia Wang and Jinghui Chen and Lu Lin},
      title={XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution},
      year={2024},
      cdate={1704067200000},
      journal={CoRR},
      volume={abs/2405.20404},
      url={https://doi.org/10.48550/arXiv.2405.20404}
    }
  • Authors: Yurui Chang, Bochuan Cao, Yujia Wang, Jinghui Chen, Lu Lin
  • Authorids: https://dblp.org/search/pid/api?q=author:Yurui_Chang:, https://dblp.org/search/pid/api?q=author:Bochuan_Cao:, https://dblp.org/search/pid/api?q=author:Yujia_Wang:, https://dblp.org/search/pid/api?q=author:Jinghui_Chen:, Lu Lin
  • Venue: CoRR 2024
  • Venueid: dblp.org/journals/CORR/2024
  • Abstract: Large Language Models (LLMs) have demonstrated impressive performance on complex text generation tasks. However, the contribution of the input prompt to the generated content remains obscure to humans, underscoring the need to elucidate and explain the causality between input and output pairs. Existing work on prompt-specific explanation often confines the model output to classification or next-word prediction. The few initial attempts at explaining the entire generation treat the input prompt texts independently, ignoring their combinatorial effects on the follow-up generation. In this study, we introduce a counterfactual explanation framework based on joint prompt attribution, XPrompt, which aims to explain how a few prompt texts collaboratively influence the LLM's complete generation. In particular, we formulate prompt attribution for generation interpretation as a combinatorial optimization problem and introduce a probabilistic algorithm to search for the causal input combination in the discrete space (a toy sketch of such a search follows this record). We define and use multiple metrics to evaluate the produced explanations, demonstrating both the faithfulness and the efficiency of our framework.
  • PDF: http://arxiv.org/pdf/2405.20404v1
  • Note – Pdate: 01 Jan 2024, 00:00 Coordinated Universal Time
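
  A toy illustration of the search described in the abstract above, an editorial sketch rather than the authors' code: generation_loss is a hypothetical stand-in for a model-based faithfulness score (a real one would query the LLM to check how well a masked prompt reproduces the original generation), and plain randomized hill-climbing over binary keep/drop masks stands in for XPrompt's probabilistic search over the discrete space of prompt-token combinations.

    import random

    random.seed(0)

    # Hypothetical stand-in for a model-based faithfulness score: pretend
    # tokens 2 and 5 jointly drive the generation, so a mask scores best
    # when it keeps exactly those two tokens. A real loss would query the LLM.
    def generation_loss(mask, important=(2, 5)):
        kept = {i for i, keep in enumerate(mask) if keep}
        return 10 * len(set(important) - kept) + len(kept - set(important))

    def search_causal_subset(n_tokens, iters=500, flip_p=0.15):
        """Randomized hill-climbing over keep/drop masks: a toy version of
        searching the discrete space of joint prompt-token combinations."""
        best = [random.random() < 0.5 for _ in range(n_tokens)]
        best_loss = generation_loss(best)
        for _ in range(iters):
            # Propose a neighbouring mask by flipping each bit with
            # probability flip_p, then accept it only if the loss improves.
            cand = [keep ^ (random.random() < flip_p) for keep in best]
            loss = generation_loss(cand)
            if loss < best_loss:
                best, best_loss = cand, loss
        return [i for i, keep in enumerate(best) if keep]

    print(search_causal_subset(8))  # typically recovers the joint set [2, 5]

  Because each mask is scored as a whole, tokens 2 and 5 are credited jointly, which is the combinatorial effect the abstract contrasts with treating prompt texts independently.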

    Edit Info

    Readers: Everyone
    Writers: DBLP
    Signatures: DBLP Uploader

Record Edit by Lu Lin

  • 01 Aug 2024, 16:56 Coordinated Universal Time
  • Title: XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution.
  • Authors: Yurui Chang, Bochuan Cao, Yujia Wang, Jinghui Chen, Lu Lin 0001
  • Authorids: , , , , Lu Lin
  • Venue: CoRR 2024
  • Venueid: DBLP.org
  • Note – License: CC BY-SA 4.0
  • Note – Signatures: Lu Lin
  • Note – Readers: Everyone
  • Note – Writers: ~

    Edit Info

    • Content – Xml:
      <article key="journals/corr/abs-2405-20404" publtype="informal">
        <author>Yurui Chang</author>
        <author>Bochuan Cao</author>
        <author>Yujia Wang</author>
        <author>Jinghui Chen</author>
        <author>Lu Lin 0001</author>
        <title>XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution.</title>
        <year>2024</year>
        <volume>abs/2405.20404</volume>
        <journal>CoRR</journal>
        <ee>https://doi.org/10.48550/arXiv.2405.20404</ee>
        <url>db/journals/corr/corr2405.html#abs-2405-20404</url>
      </article>
    Readers: Everyone
    Writers: DBLP Uploader
    Signatures: Lu Lin