# The AI evidence base is systematically biased toward developers over affected communities

<div class="pills-container"><span class="pill">Last Updated: April 2026</span></div>

Existing AI research skews toward benchmark performance over societal disruption (benchmarks are easier to measure), toward precedented harms over novel ones (we can only study what has already happened), and toward developers over affected communities (the people funding and publishing AI research are largely the same people deploying AI systems). When regulators defer to "the evidence," they're deferring to a dataset that was never neutral. This doesn't make the evidence worthless; it means the bias has to be accounted for explicitly. It also means that [[Process regulation requires a much lower evidence threshold than substantive regulation|conflating substantive and process regulation]] becomes a way to use the high evidence bar for one to indefinitely block the other.