SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 oral · CC BY 4.0
TL;DR: We introduce SWE-Lancer, a benchmark of over 1,400 real-world full-stack engineering tasks from Upwork, worth $1 million USD in payouts made to freelance software engineers.
Abstract: We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from $50 bug fixes to $32,000 feature implementations — and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split. By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
Lay Summary: Large language models like ChatGPT are getting better at coding, but can they perform real software engineering work? The SWE-Lancer benchmark aims to measure this. We collected nearly 1,500 actual freelance software engineering jobs from Upwork, worth $1 million USD in total payouts, and measured model performance on these tasks. SWE-Lancer tasks consist of both independent engineering work — like bug fixes and feature implementations — and managerial tasks, where models act as a tech lead and pick the best implementation proposal for a given problem. We evaluated multiple state-of-the-art models, and all were unable to solve the majority of tasks, indicating they do not yet match human performance on real-world software engineering work. To facilitate future research, we open-sourced our evaluation code and a public evaluation split so anyone can run SWE-Lancer. By mapping model performance to real dollars, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
Link To Code: https://github.com/openai/SWELancer-Benchmark
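The sketch below is an illustrative summary of the grading scheme described in the abstract, not code from the SWELancer-Benchmark repository. The `Task` fields, function names, and the assumption of all-or-nothing payouts per task are hypothetical stand-ins: independent tasks earn their listed payout only if the end-to-end tests pass, managerial tasks only if the model's chosen proposal matches the original engineering manager's decision, and total earnings map performance to dollars.

```python
# Hypothetical sketch of SWE-Lancer-style grading; not the actual harness.
from dataclasses import dataclass
from typing import Literal

@dataclass
class Task:
    task_id: str
    kind: Literal["independent", "managerial"]
    payout_usd: float  # real-world payout attached to the Upwork task

def grade_independent(task: Task, all_e2e_tests_passed: bool) -> float:
    # Assumed all-or-nothing scoring: the payout is earned only when
    # every triple-verified end-to-end test for the task passes.
    return task.payout_usd if all_e2e_tests_passed else 0.0

def grade_managerial(task: Task, chosen_proposal: str, original_choice: str) -> float:
    # The model's selected proposal must match the choice made by the
    # originally hired engineering manager.
    return task.payout_usd if chosen_proposal == original_choice else 0.0

def total_earnings(per_task_payouts: list[float]) -> float:
    # Summing earned payouts maps model performance to monetary value.
    return sum(per_task_payouts)
```

For example, a model that fixes a $50 bug (all tests pass) but picks the wrong proposal on a $1,000 managerial task would earn $50 under this scheme.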
Primary Area: General Machine Learning->Evaluation
Keywords: software engineering, benchmark, evals, evaluations, dataset, tasks, real-world, swe, coding, delegation, agents, language models, full-stack engineering
Submission Number: 13597