# Mildly capable AI deployed at scale creates systemic risk regardless of individual alignment
<div class="pills-container"><span class="pill">Last Updated: April 2026</span></div>
The dominant AI risk frame focuses on highly capable misaligned systems: a single powerful agent that deliberately pursues the wrong objective. A distinct and underexamined risk comes from mildly capable AI that is locally aligned but produces harmful emergent behavior when deployed at scale across interconnected systems. No individual system defects, yet the aggregate dynamic creates irreversible outcomes.
This is the RAAP (robust agent-agnostic process) problem. It reframes where safety work needs to focus: ensuring individual system alignment is necessary but not sufficient when deployment is widespread and systems interact with each other and with institutions that have their own misaligned incentives. At that point, [[At scale, outer alignment becomes a policy problem|outer alignment becomes a policy problem]], not just a technical one — and [[AI governance needs layered controls that fail independently|single-layer governance]] is especially poorly suited to catching it.
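The mechanism is easiest to see in a toy model. The sketch below is a minimal commons simulation, not a claim about any real deployment; `N_AGENTS`, `SAFETY_CAP`, `REGEN_RATE`, and `COLLAPSE_FLOOR` are all illustrative assumptions. Every agent's extraction passes its own local safety check on every step, yet the aggregate draw outruns regeneration and the shared stock crosses a threshold from which it cannot recover.

```python
"""Toy commons model: locally aligned agents, harmful aggregate dynamics.

Every agent extracts far below its individual safety cap, so each local
alignment check passes on every step. At scale, total extraction still
exceeds regeneration, and once the shared stock falls below an
irreversibility threshold it never recovers. All parameters are
illustrative assumptions, not measurements of any real system.
"""

N_AGENTS = 1_000        # deployment scale
PER_AGENT_TAKE = 0.002  # extraction per agent per step
SAFETY_CAP = 0.01       # cap each agent checks its own action against
REGEN_RATE = 0.015      # fractional regrowth of the stock per step
COLLAPSE_FLOOR = 20.0   # below this, regeneration shuts off for good


def locally_aligned_take(stock: float) -> float:
    """One agent's action plus its local safety check.

    The check sees only this agent's own extraction, never the
    aggregate, so it passes even while the commons collapses.
    """
    take = min(PER_AGENT_TAKE, stock / N_AGENTS)  # never overdraw alone
    assert take <= SAFETY_CAP  # the "alignment check": trivially satisfied
    return take


def simulate(steps: int = 200) -> None:
    stock, collapsed = 100.0, False
    for t in range(steps):
        stock -= sum(locally_aligned_take(stock) for _ in range(N_AGENTS))
        if stock < COLLAPSE_FLOOR:
            collapsed = True  # irreversible: regrowth never resumes
        if not collapsed:
            stock += REGEN_RATE * stock
        if t % 25 == 0 or stock <= 0.0:
            print(f"step {t:3d}  stock {stock:7.2f}  collapsed={collapsed}")
        if stock <= 0.0:
            break


if __name__ == "__main__":
    simulate()
```

The local check never fires because it inspects the wrong unit of analysis: each agent in isolation, while the harm lives only in the aggregate. That is the sense in which per-system alignment verification is necessary but not sufficient once deployment is interconnected and at scale.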