This paper introduces authority separation as a foundational architectural principle for AI systems in which language models propose actions but do not authorize their execution. We demonstrate that separating generation from execution authority provides structural guarantees under defined threat models across four domains: security (prompt injection), epistemics (hallucination), economics (cost-correctness), and safety (irreversible constraint learning). We provide a unified evaluation suite and a reference architecture illustrating how authority separation eliminates failure modes that persist under prompt-based mitigations.
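
To make the propose/authorize split concrete, the following is a minimal illustrative sketch, not the paper's reference architecture: the model's output is treated purely as a proposal (data), and a separate authority component holds the only path to execution. All names here (`Proposal`, `PolicyAuthority`, `authorize_and_execute`, the allowlist contents) are hypothetical placeholders introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Mapping

# Hypothetical sketch of authority separation: the model only *proposes*
# actions as data; a separate authority decides whether anything executes.

@dataclass(frozen=True)
class Proposal:
    action: str                 # e.g. "lookup_weather" (model-chosen name)
    args: Mapping[str, str]     # arguments the model filled in


class PolicyAuthority:
    """Holds execution authority; the model never invokes tools directly."""

    def __init__(self, allowed: Mapping[str, Callable[[Mapping[str, str]], str]]):
        self.allowed = allowed  # explicit allowlist of executable actions

    def authorize_and_execute(self, proposal: Proposal) -> str:
        handler = self.allowed.get(proposal.action)
        if handler is None:
            # Structural property: actions outside the allowlist cannot run,
            # regardless of what the model (or injected text) proposes.
            return f"rejected: {proposal.action!r} is not an authorized action"
        return handler(proposal.args)


# Example wiring (hypothetical): the model's text is parsed into a Proposal
# elsewhere; only the authority's allowlist determines what actually executes.
authority = PolicyAuthority(allowed={
    "lookup_weather": lambda args: f"weather in {args.get('city', '?')}: sunny",
})

print(authority.authorize_and_execute(
    Proposal(action="lookup_weather", args={"city": "Oslo"})))
print(authority.authorize_and_execute(
    Proposal(action="delete_file", args={"path": "/etc/passwd"})))
```

In this sketch the model contributes only data, never authority: whatever a prompt injection persuades the model to propose, execution is bounded by the allowlist the authority was configured with.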