July 2025 – The White House finally catches up to what we’ve been saying all along
The White House has released “America’s AI Action Plan,” a comprehensive strategy to win the global AI race. While we applaud the Administration’s recognition that AI governance is a critical national priority, we can’t help but point out: we’ve been advocating for these exact solutions for years.
The Action Plan acknowledges what we’ve long argued—that the United States faces an unprecedented technological transformation requiring immediate, coordinated action. As President Trump stated in the document: “Breakthroughs in these fields have the potential to reshape the global balance of power, spark entirely new industries, and revolutionize the way we live and work.”
The Plan Gets It Right—But Misses the Critical Piece
The Action Plan’s three pillars—accelerating innovation, building infrastructure, and leading international diplomacy—are sound. The recognition that AI will drive “an industrial revolution, an information revolution, and a renaissance—all at once” mirrors our own sense of urgency about this technological moment.
However, there’s a glaring omission in this 25-page strategy: democratic accountability.
The Plan calls for:
- Removing regulatory barriers to AI development
- Accelerating AI adoption across government agencies
- Building massive AI infrastructure
- Training workers for AI-enabled jobs
- Establishing American AI dominance globally
But who will the American people hold accountable for these sweeping changes?
The Missing Link: Elected Technology Leadership
Every major initiative in the Action Plan—from AI evaluations to cybersecurity to workforce development—will be implemented by appointed bureaucrats and existing agency structures. The Plan mentions Chief AI Officers, AI Consortiums, and interagency coordination councils, but nowhere does it address the fundamental democratic deficit in technology governance.
Consider these critical questions, and who ends up answering them under the Action Plan:
Who decides which AI systems are “objective and free from ideological bias”? Appointed officials in agencies like NIST and DOC.
Who determines how to balance innovation with security concerns? Unelected experts in the Defense Department and Intelligence Community.
Who chooses which communities get priority for AI infrastructure investment? Federal administrators using discretionary funding guidelines.
Who evaluates whether AI adoption is helping or harming American workers? Department of Labor bureaucrats and their chosen contractors.
Our Framework Provides the Democratic Foundation
The Department of Technology framework we’ve long advocated provides the missing democratic accountability that would make the Action Plan actually work for the American people:
Federal Level: Secretary of Technology
The Action Plan envisions massive federal coordination across DOD, DOC, NSF, DOE, and dozens of other agencies. Instead of this bureaucratic maze, imagine a single elected Secretary of Technology directly accountable to voters for America’s AI strategy.
State Level: Technology Secretaries
The Plan acknowledges that states will play crucial roles in AI regulation and workforce development. Rather than hoping appointed state officials align with federal priorities, elected state Technology Secretaries would ensure local voters have a direct say in how AI transforms their communities.
Local Level: Technology Directors and Supervisors
The Action Plan emphasizes AI’s impact on local infrastructure, education, and services. Elected local technology leaders would ensure these changes serve community needs rather than simply carry out top-down federal mandates.
Why Democratic Accountability Makes the AI Action Plan Work Better
The Action Plan’s success depends on public trust and adoption. Consider how elected technology leadership would strengthen each pillar:
Pillar I – Accelerate Innovation: Voters could choose leaders who balance innovation with their values on privacy, security, and economic opportunity—rather than having these trade-offs made by appointed experts.
Pillar II – Build Infrastructure: Local communities could elect leaders who ensure AI infrastructure serves local needs, not just national priorities determined in Washington.
Pillar III – International Leadership: A democratically chosen AI strategy would carry more legitimacy internationally than one crafted by unelected bureaucrats.
The Clock Is Still Ticking
The Action Plan correctly identifies the urgency of the AI moment. But urgency without accountability is just technocracy. The Plan asks Americans to trust that appointed experts will make the right decisions about technologies that will reshape every aspect of our lives.
We’ve been arguing for years that this approach is insufficient. The release of this Action Plan—which mirrors many of our policy recommendations while ignoring our core insight about democratic governance—proves our point.
The AI revolution is too important to leave to unelected officials.
A Call to Action
The White House AI Action Plan is a step forward, but it’s incomplete without democratic accountability. Every recommendation in the Plan would be more effective, more legitimate, and more sustainable if implemented through elected Departments of Technology at all levels of government.
We urge:
- Candidates to run on platforms that include technology leadership positions
- Voters to demand a direct say in who leads AI policy
- Lawmakers to introduce legislation establishing elected technology departments
- Communities to pilot local technology leadership positions
The Biden-Harris Administration ignored the need for democratic technology governance. The Trump Administration has produced a comprehensive AI strategy but maintained the same accountability gap.
It’s time for voters to demand better.
The future of American AI leadership shouldn’t depend on hoping the right experts are making the right decisions behind closed doors. It should depend on voters choosing leaders who will implement AI policies that reflect community values and priorities.
The AI revolution is here. Democracy needs to catch up. The Department of Technology framework provides the roadmap—now we need the political will to implement it.
The Department of Technology movement has been advocating for elected technology leadership since before it became fashionable in Washington. While we’re pleased to see recognition of AI’s transformative potential, true American leadership requires more than good policy—it requires democratic accountability. Join us in demanding that the AI future be chosen by voters, not bureaucrats.
Critical Flaws in America’s AI Action Plan Without Elected Technology Leadership
Democratic Accountability Gaps
- No voter input on AI policy priorities – All major decisions made by appointed officials with no electoral consequences
- Unelected officials determining “objective truth” – Plan calls for AI systems free from “ideological bias” but gives no democratic mechanism for defining objectivity
- No public recourse for failed AI policies – Citizens cannot vote out officials responsible for AI governance mistakes
- Bureaucratic opacity – Complex interagency coordination with no single elected official accountable to voters
- Top-down mandates without local consent – Federal AI initiatives imposed on communities with no local democratic input
Policy Implementation Problems
- Fragmented responsibility across 20+ agencies – No single accountable leader for coherent AI strategy
- Conflicting agency priorities – DOD, DOC, DOE, NSF, and others pursuing separate agendas without unified democratic oversight
- Bureaucratic turf wars – Multiple Chief AI Officers and councils with overlapping, unclear authorities
- Slow adaptation to technological change – Appointed bureaucrats less responsive than elected officials facing regular elections
- Policy continuity problems – Strategies change with each administration rather than through democratic processes
Economic and Labor Concerns
- No worker voice in automation decisions – AI deployment affecting jobs decided by unelected officials and corporate interests
- Unequal regional AI investment – Federal funding decisions made without local electoral input on community needs
- Corporate capture risk – Industry “partnerships” and “consortiums” influencing unaccountable bureaucrats
- No democratic oversight of AI workforce displacement – Labor impact assessments conducted by appointed experts, not elected representatives
- Taxation without representation in AI economy – AI-driven economic changes imposed without voter approval of governing officials
Infrastructure and Security Flaws
- No local consent for AI infrastructure placement – Data centers and energy projects sited without elected local technology leadership
- Undemocratic environmental trade-offs – Streamlined permitting removes local democratic input on environmental impacts
- Security decisions behind closed doors – AI security evaluations and incident response controlled by unelected intelligence/defense officials
- No public oversight of AI procurement – Government AI contracting decisions made without elected oversight specific to technology
- Critical infrastructure vulnerability – AI systems protecting essential services overseen by appointed, not elected, officials
Innovation and Competition Issues
- Regulatory capture by incumbents – “Remove red tape” policies benefit established companies without democratic debate
- No voter input on AI development priorities – Open source vs. closed model decisions made by unelected officials
- Undemocratic standard-setting – AI evaluation criteria and safety standards developed without elected oversight
- Export control decisions without representation – International AI trade policies set by appointed officials
- No democratic input on research funding – Billions in AI research dollars allocated without elected technology leadership
International Relations Problems
- Unelected officials representing American AI values – International negotiations conducted without democratically chosen technology leaders
- No voter accountability for AI diplomacy failures – Citizens cannot remove officials responsible for losing AI competitiveness
- Authoritarian governance model – Centralized, expert-driven approach mirrors Chinese AI governance rather than democratic principles
- Alliance decisions without democratic input – AI partnerships with allies decided by appointed officials
- Trade-offs between security and openness – Export controls and technology sharing decided without elected oversight
Privacy and Civil Liberties Risks
- Surveillance expansion without electoral consent – AI-enabled monitoring capabilities deployed by unelected officials
- No democratic oversight of AI bias mitigation – Fairness and discrimination policies set by appointed bureaucrats
- Data collection without voter approval – Government AI systems gathering citizen data without elected oversight
- Synthetic media policies imposed top-down – Deepfake and misinformation responses developed without democratic input
- Constitutional rights interpretation by bureaucrats – First Amendment and AI issues decided by unelected officials
Implementation and Execution Weaknesses
- No electoral consequences for failure – Officials cannot be voted out if AI initiatives fail or cause harm
- Lack of local customization – One-size-fits-all federal approach ignores diverse community needs and preferences
- No democratic feedback mechanisms – Plan relies on expert assessment rather than voter evaluation of success
- Coordination failures across government levels – No elected officials bridging federal, state, and local AI governance
- Missing community trust and buy-in – Public skepticism of unelected experts making life-altering technology decisions
Long-term Governance Concerns
- Democratic erosion through technocracy – Concentrates power in unelected expert class rather than elected representatives
- No mechanism for course correction – Policy changes require bureaucratic processes rather than democratic elections
- Generational accountability gap – Young people most affected by AI have no direct vote on technology leadership
- Special interest influence – Lobbying targets unelected officials with no electoral accountability to broader public
- Constitutional questions unresolved – Major technology governance decisions made without clear democratic mandate