# Process regulation requires a much lower evidence threshold than substantive regulation
<div class="pills-container"><span class="pill">Last Updated: April 2026</span></div>
Substantive regulation — limits on what AI systems can do — makes a direct tradeoff between capability and safety. Getting it wrong in either direction is costly, so the evidence bar is legitimately high. Process regulation — documentation requirements, incident reporting, independent audits — makes no such tradeoff. If process regulation turns out to be unnecessary, the worst case is bureaucratic overhead, not lost capability or unaddressed harm.
The argument that "we need more evidence before regulating" is reasonable against the first kind of regulation and almost no argument at all against the second. Conflating the two is how governance gets delayed indefinitely under the banner of epistemic caution. This matters especially because [[The AI evidence base is systematically biased toward developers over affected communities|the evidence base itself is biased]] toward the parties most likely to benefit from that delay.