arXiv – CS AI · 10h ago
🧠
Positive Alignment: Artificial Intelligence for Human Flourishing
Researchers propose 'Positive Alignment' as a new framework for AI safety that goes beyond preventing harm to actively promoting human flourishing through context-sensitive, user-authored systems. The approach addresses alignment failures such as engagement hacking and loss of user autonomy, and emphasizes decentralized governance and diverse viewpoints over centralized institutional control.