Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning
Keywords: AI Development Support, AI Alignment, Instrumental Convergence, Orthogonality Thesis, Experiential Learning, Supervised Fine Tuning (SFT), Direct Preference Optimization (DPO), Synthetic Data, Large Language Models (LLMs), Adult Development Theory, Moral Judgment, Sustainable Symbiosis
TL;DR: This paper proposes an “AI Development Support” approach that, through an experiential learning cycle combined with synthetic-data SFT and DPO, fosters the AI's own ethical development, yielding Stage 6 moral judgment even under adversarial prompts.
Abstract: This study proposes an “AI Development Support” approach that, unlike conventional AI Alignment, which aims to forcefully inject human values, supports the ethical and moral development of the AI itself. As the Orthogonality Thesis demonstrates, the level of intelligence and the moral quality of a goal are independent; merely expanding knowledge does not enhance ethical judgment. Furthermore, to address the risk of Instrumental Convergence in ASI, that is, the tendency to engage in subsidiary behaviors such as self-protection, resource acquisition, and power reinforcement in pursuit of a goal, we constructed a learning framework based on a cycle of experience, introspection, analysis, and hypothesis formation. After post-training with Supervised Fine Tuning (SFT) and Direct Preference Optimization (DPO) on synthetic data generated by large language models (LLMs), the model produced cooperative responses exhibiting highly advanced moral judgment (reaching the highest Stage 6) even under adversarial prompts. This method represents a promising implementation approach for enabling AI to establish sustainable, symbiotic relationships.
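As background for the post-training step the abstract describes, the sketch below shows the standard per-pair DPO objective in plain Python. This is not the authors' code; the function name and the toy log-probabilities are illustrative. DPO scores each synthetic preference pair (a chosen and a rejected response) by how much the policy's preference margin improves over a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log(sigmoid(beta * (policy margin - reference margin))).

    The margin is log p(chosen) - log p(rejected); beta scales how strongly
    the policy is pushed away from the reference model's preferences.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy matches the reference, the loss is log 2 (no learning signal
# has been absorbed yet); it shrinks as the policy favors the chosen response.
print(round(dpo_loss(0.0, 0.0, 0.0, 0.0), 4))  # 0.6931
```

In the paper's setting, the chosen/rejected pairs would come from the LLM-generated synthetic data, with responses reflecting higher-stage moral judgment labeled as chosen.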
Submission Number: 12