Why Centralizing Artificial Intelligence by Executive Order Will Fail

Introduction

Artificial intelligence is a general-purpose technology whose impact extends across nearly every sector of the economy. Like electricity or the internet before it, AI evolves through decentralized experimentation—across private firms, universities, open-source communities, and state-level initiatives. That decentralized structure is not accidental; it is the reason innovation occurs at scale and speed.

President Trump’s recent executive order asserting a centralized national framework for artificial intelligence departs sharply from this model. According to public reporting, the order seeks to establish federal primacy over AI governance, directs the Department of Justice to challenge state AI laws, encourages agencies to identify and undermine state regulations deemed inconsistent with federal policy, and conditions federal funding on state compliance. It also asserts that state AI laws may be preempted under the Commerce Clause.

This approach raises serious constitutional and economic concerns. Executive orders exist to implement laws passed by Congress, not to substitute for them. The President does not possess unilateral authority to preempt state law, restructure federal-state relations, or consolidate regulatory control over private innovation. Those powers are legislative in nature and belong to Congress.

Equally problematic, centralized governance of AI contradicts basic economic principles. Innovation in complex systems does not emerge from political direction but from competition, experimentation, and decentralized knowledge. Attempts to impose uniformity through executive action risk slowing innovation, entrenching incumbents, and politicizing technological development.
The question is not whether AI should be governed. It should. The question is whether governance should respect constitutional limits and free-market dynamics—or whether it should be imposed through executive consolidation. The answer will determine not only the success of AI policy, but the integrity of American institutional design.

Executive Authority and Constitutional Limits

The Constitution draws a clear line between legislative and executive power. Congress makes the law. The President executes it. Federal preemption of state law occurs only through congressional statute or direct conflict with valid federal law under the Supremacy Clause. An executive order, standing alone, does not carry preemptive force.

Yet the executive order directs the Department of Justice to challenge state AI laws and conditions federal funding on alignment with an executive-defined national policy. That approach raises immediate federalism concerns. States possess broad police powers to regulate consumer protection, civil rights, labor standards, and public safety—areas increasingly affected by AI systems. Absent congressional action, the executive branch cannot simply displace those powers.

The order’s reliance on the Commerce Clause is equally fragile. While Congress has authority to regulate interstate commerce, the President does not independently wield that power. Courts have never recognized a freestanding executive authority to invalidate state laws on commerce grounds without statutory authorization. Attempting to achieve national uniformity through litigation and funding pressure rather than legislation risks judicial rejection and institutional instability.

Independent agencies further complicate the matter. Many regulatory bodies were intentionally structured by Congress to operate outside direct presidential control. Executive attempts to consolidate authority across these agencies undermine statutory design and weaken the separation of powers.



Economic Reality and the Limits of Centralization

From an economic standpoint, centralized control of AI governance suffers from a fundamental flaw: it assumes that innovation can be directed rather than discovered. Economists have long recognized that knowledge in complex systems is dispersed among millions of actors and cannot be effectively centralized. AI development depends on rapid iteration, localized decision-making, and competitive pressure—conditions incompatible with bureaucratic command.

Centralized frameworks slow innovation by introducing regulatory bottlenecks and politicized incentives. Decision-making shifts from engineers and entrepreneurs to administrators and political appointees, whose incentives are shaped by risk avoidance and electoral considerations rather than performance. Compliance replaces experimentation, and smaller firms are crowded out in favor of large incumbents that can absorb regulatory complexity.

This dynamic also distorts competition. Uniform national rules imposed through executive action tend to favor scale and entrench existing market leaders. Regulation becomes a barrier to entry rather than a neutral boundary, contradicting the principles underlying antitrust law and competitive markets.



Security, Resilience, and Decentralized Innovation

Centralization also undermines resilience. Distributed systems are more robust against cyberattacks, institutional failure, and adversarial exploitation. Concentrating authority and infrastructure creates single points of failure and magnifies the consequences of error or capture.

The United States has historically relied on diversity of suppliers, redundant systems, and competitive innovation to maintain technological leadership. Centralized technological governance forfeits that advantage in exchange for an illusion of control.



Conclusion

Centralized government control over technology and innovation is destined to fail—not because of partisan disagreement, but because it violates foundational principles of law and economics.

Constitutionally, executive orders cannot substitute for legislation. They cannot preempt state law, coerce states into regulatory submission, or consolidate authority that Congress deliberately dispersed. Doing so undermines federalism, separation of powers, and the rule of law.

Economically, centralized innovation is a contradiction. Free markets are not ideological abstractions; they are information systems. Competition, experimentation, and failure transmit signals that no centralized authority can replicate. When government attempts to command innovation rather than govern its boundaries, it suppresses those signals and guarantees inferior outcomes.

History reinforces the lesson. Technological progress flourishes where power is limited, markets are open, and institutions are pluralistic. Centralized regimes innovate more slowly, adapt poorly, and collapse under their own rigidity.

Artificial intelligence will shape the next century of economic and social life. Whether it develops within a constitutional, market-based framework—or under centralized executive control—will determine not only its success, but the future balance between liberty and power.

Centralizing AI by executive order violates free-market principles, exceeds constitutional authority, and ensures failure. That is not a political judgment. It is a structural one.



By Eugene Groysman, MBA, MS, PO/PM