Abstract: We introduce the workbook time machine for automatically creating benchmarks that evaluate the ability of language models to create (sequences of) calculated objects (formulas, charts, pivot tables, and conditional formatting) in spreadsheets. We generate and select 262 problems on public workbooks that require varying numbers of intermediate steps (e.g., formula → chart), and we generate instructions of increasing abstractness for each problem. We evaluate existing spreadsheet manipulation agents and baselines along these two dimensions, as well as on their ability to generate different types of objects. Our evaluation shows that direct code generation outperforms agents on simple problems and on problems with detailed instructions, and that the API used to control spreadsheets poses a significant limitation, trading off ease of use (Python) against completeness (VBA).
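As an illustration only (not part of the submission materials): the kind of multi-step object creation the abstract describes, a formula followed by a chart that depends on the same sheet, can be sketched in Python with openpyxl. The toy data, file name, and the choice of openpyxl are assumptions for illustration; the paper's agents and baselines may use other APIs (e.g., VBA).

    from openpyxl import Workbook
    from openpyxl.chart import BarChart, Reference

    # Hypothetical two-step task: add a SUM formula, then chart the same data.
    wb = Workbook()
    ws = wb.active
    ws.append(["Region", "Sales"])  # assumed toy data, not from the paper
    ws.append(["North", 120])
    ws.append(["South", 95])

    # Step 1: a calculated object (formula).
    ws["B4"] = "=SUM(B2:B3)"

    # Step 2: a chart object over the same cells (formula -> chart sequence).
    chart = BarChart()
    data = Reference(ws, min_col=2, min_row=1, max_row=3)  # Sales column incl. header
    cats = Reference(ws, min_col=1, min_row=2, max_row=3)  # region labels
    chart.add_data(data, titles_from_data=True)
    chart.set_categories(cats)
    ws.add_chart(chart, "D2")

    wb.save("example.xlsx")  # assumed output path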
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: spreadsheet tasks, evaluation, benchmarking, large language models, multihop queries
Contribution Types: Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
Software: zip
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 2 (Related Work) cites all models used for benchmarking on our dataset
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: Figure 2
C Computational Experiments: No
C1 Model Size And Budget: N/A
C2 Experimental Setup And Hyperparameters: N/A
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 5
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D3 Elaboration: Open-source, publicly available dataset
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 1296