Keywords: ai-augmented software engineering, axiomatic design, abstract thinking, large language models
Abstract: The Abstraction and Reasoning Corpus (ARC) challenge highlights the persistent gap between current Artificial Intelligence (AI) systems and human-level reasoning: even the most advanced Large Language Models (LLMs) struggle to match human performance, particularly in abductive reasoning, despite their growing strength in inductive and deductive tasks. This limitation is especially relevant in domains such as software design, where effective system creation requires abstract thinking, abductive hypothesis formation, and deductive synthesis, underscoring the broader challenge of achieving truly human-like reasoning in AI.
This study demonstrates how a systematic design framework, namely axiomatic design, can help mitigate weaknesses in AI-augmented software engineering.
Paper Type: Full-length paper (i.e., case study, theoretical, or applied research paper). 8 pages
Submission Number: 29