Abstract: Foundation models, including autoregressive generative models (e.g., Large Language Models and Large Multimodal Models) and generative diffusion models (e.g., Text-to-Image Models and Video Generative Models), are essential tools with broad applications across domains such as law, medicine, education, and finance. As these models are increasingly deployed in real-world scenarios, ensuring their reliability and responsibility has become critical for academia, industry, and government. This survey addresses the reliable and responsible development of foundation models. We explore critical issues, including bias and fairness, security and privacy, uncertainty, explainability, and distribution shift. We also cover model limitations, such as hallucinations, as well as methods like alignment and Artificial Intelligence-Generated Content (AIGC) detection. For each area, we review the current state of the field and outline concrete future research directions. Additionally, we discuss the intersections between these areas, highlighting their connections and shared challenges. We hope our survey fosters the development of foundation models that are not only powerful but also ethical, trustworthy, reliable, and socially responsible.
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Magda_Gregorova2
Submission Number: 4644