# Epistemic resilience outside frontier hubs is the underbuilt foundation of AI safety
<div class="pills-container"><span class="pill">Last Updated: April 2026</span></div>
If institutions outside frontier AI hubs — governments, civil society, non-frontier companies — can't understand AI risks, set their own priorities, and act on them, then the AI safety ecosystem is only as robust as its most connected nodes. That's a single-point-of-failure structure for a problem that requires global coverage.
The theory of change follows directly: build recognized roles and mandates for AI safety oversight inside non-frontier institutions, and develop cheap, usable AI safety tools those institutions can run independently. This is what makes [[AI governance failures are usually structural, not technical|structural governance failures]] correctable beyond the frontier — and what makes [[Philippine AI strategy requires starting from specific leverage points|leverage-point thinking]] more than local optimization. Non-frontier capacity isn't a nice-to-have; it's what keeps [[At scale, outer alignment becomes a policy problem|policy-level outer alignment problems]] from being handled only by the parties with the most conflicts of interest.