Keywords: Semantic Memory, Large Language Models, LLMs, codification, Property Listing Task, PLT
TL;DR: This paper proposes ChatGPT for codifying Property Listing Tasks, showing that a fine-tuned GPT-3.5-turbo-0125 model achieves performance similar to or better than AC-PLT, the current state of the art for this problem.
Abstract: In this paper, we propose ChatGPT for the codification of Property Listing Task (PLT) data. PLTs are a standard method for studying semantic memory, i.e., how people mentally represent concepts. In a PLT, a group of participants is asked to list properties/features of a concept (e.g., ''horse''). Because different properties can have the same meaning (e.g., ''quadruped'' and ''four legs''), the listed properties must be codified before any analysis. Currently, codification is carried out by at least two human coders, making it a slow and non-replicable process, given the variability of the codes the coders assign. Automating codification with ChatGPT would speed it up, reduce the variability inherent in human coding, and yield replicable results. We compare ChatGPT with AC-PLT, the first semi-automatic codification framework for PLTs, using accuracy on two datasets. The experiment compares the AC-PLT framework with GPT-3.5-turbo-0125 (using one-shot prompting and fine-tuning) and GPT-4o (using one-shot prompting). GPT-3.5-turbo-0125 with fine-tuning shows performance comparable to AC-PLT, opening a possible area of research for this codification process.
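The one-shot prompting setup described above can be sketched as follows. This is a minimal illustration using the OpenAI Python client, not the paper's implementation: the codebook, prompt wording, one-shot example, and the helper name `codify_property` are hypothetical placeholders.

```python
# Minimal sketch of one-shot codification of a PLT response with the
# OpenAI chat API. The codebook and prompts below are illustrative,
# not the paper's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODEBOOK = ["four_legs", "has_mane", "can_be_ridden"]  # hypothetical codes

def codify_property(concept: str, prop: str) -> str:
    """Ask the model to map one listed property to a single code."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        temperature=0,  # deterministic output aids replicability
        messages=[
            {"role": "system",
             "content": "You codify properties listed for a concept. "
                        f"Answer with exactly one code from: {CODEBOOK}."},
            # One-shot example: a single solved property/code pair.
            {"role": "user", "content": "Concept: horse. Property: quadruped."},
            {"role": "assistant", "content": "four_legs"},
            # The property to codify.
            {"role": "user", "content": f"Concept: {concept}. Property: {prop}."},
        ],
    )
    return response.choices[0].message.content.strip()

# Synonymous properties such as "four legs" and "quadruped"
# should receive the same code; accuracy can then be computed
# against the codes assigned by human coders.
print(codify_property("horse", "four legs"))
```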
Submission Number: 5