Are We Ready to Be Governed by Artificial Intelligence?
Experts Bruce Schneier and Nathan Sanders explore how artificial intelligence is already shaping the executive, judicial, and legislative branches, showing that we are, at least in part, governed by AI, with more likely to come.
Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent. The result is a government incrementally transformed by AI rather than the singular technological overlord of the big screen.
Let us begin with the executive branch. One of the most important functions of this branch of government is to administer the law, including the human services on which so many Americans rely. Many of these programs have long been operated by a mix of humans and machines, even if not previously using modern AI tools such as Large Language Models.
A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare. While Biden-era guidance from the Centers for Medicare and Medicaid Services (CMS) largely blesses this use of AI by Medicare Advantage operators, the practice of overriding the medical recommendations of physicians raises profound ethical questions, with life-and-death implications for about thirty million Americans today.
This April, the Trump administration reversed many administrative guardrails on AI, relieving Medicare Advantage plans of the obligation to avoid AI-enabled patient discrimination. This month, the administration went a step further: CMS rolled out an aggressive new program that financially rewards vendors that use AI to rapidly reject prior authorization requests for “wasteful” physician- or provider-requested medical services. The same month, the administration also issued an executive order limiting states’ ability to put consumer and patient protections around the use of AI.
This shows both growing confidence in AI’s efficiency and a deliberate choice to benefit from it without restricting its possible harms. Critics of the CMS program have characterized it as effectively placing a bounty on denying care; AI, in this case, serves a ministerial function in applying that policy. But AI could equally be used to automate a different policy objective, such as minimizing the time required to approve prior authorizations for necessary services or minimizing the effort providers must expend to obtain authorization.
Next up is the judiciary. Setting aside concerns about activist judges and court overreach, jurists are not supposed to decide what the law is. The function of judges and courts is to interpret the law written by others. Just as jurists have long turned to dictionaries and expert witnesses for help with interpretation, AI has already emerged as a tool judges use to infer legislative intent and decide cases. In 2023, a Colombian judge became the first to publicly use AI to help make a ruling. The first known American federal example came a year later, when United States Circuit Judge Kevin Newsom began using AI in his jurisprudence to provide second “opinions” on the plain-language meaning of words in statutes. The District of Columbia Court of Appeals similarly used ChatGPT in 2025 to deliver an interpretation of what counts as common knowledge. And there are more examples from Latin America, the United Kingdom, India, and beyond.
These examples are likely just the tip of the iceberg. Any judge can unilaterally choose to consult an AI while drafting his opinions, just as he may choose to consult other human beings, and he may be under no obligation to disclose when he does.
This is not necessarily a bad thing. AI can replace humans, but it can also augment human capabilities, significantly expanding human agency. Whether the results are good or bad depends on many factors: the application and its context, the characteristics and performance of the AI model, and the characteristics and performance of the humans it augments or replaces. This general model applies to the use of AI in the judiciary.
Each application of AI needs to be considered in its own context, but certain principles should apply to all uses of AI in democratic settings. First and foremost, we argue, AI should be applied in ways that decentralize rather than concentrate power. It should be used to empower individual human actors rather than to automate the decision-making of a central authority. We are open to independent judges selecting and leveraging AI models as tools in their own jurisprudence, but we remain concerned about Big Tech companies building and operating a dominant AI product that becomes widely used throughout the judiciary.
This principle brings us to the legislature. Policymakers worldwide are already using AI in many aspects of lawmaking. In 2023, Brazil passed the first law written entirely by AI. Within a year, the French government had produced its own AI model tailored to help Parliament consider amendments. By the end of that year, the use of AI in legislative offices had become widespread enough that twenty percent of state-level staffers in the United States reported using it, and another forty percent were considering it.
These legislators and their staffers collectively face a significant choice: whether to wield AI in ways that concentrate or distribute power. If legislative offices use AI primarily to encode the policy prescriptions of party leadership or powerful interest groups, they will effectively cede their own power to those central authorities. AI here serves only as a tool enabling that handover.
On the other hand, if legislative offices use AI to amplify their capacity to express and advocate for the policy positions of their principals, the elected representatives, they can strengthen their role in government. AI can also help them scale their ability to listen to many voices and synthesize input from constituents, making it a powerful tool for better realizing democracy. We may prefer a legislator who translates his principles into the technical components and legislative language of bills with the aid of a trustworthy AI tool under his exclusive control rather than with the aid of lobbyists operating under the control of a corporate patron.
Examples from around the globe demonstrate how legislatures can use AI as a tool for tapping into constituent feedback to drive policymaking. The European civic technology organization Make.org is organizing large-scale digital consultations on topics such as European peace and defense. The Scottish Parliament is funding the development of open civic deliberation tools such as Comhairle to help scale civic participation in policymaking. And Japanese Diet member Takahiro Anno and his party, Team Mirai, are showing how political innovators can build purpose-fit applications of AI to engage with voters.
AI is a power-enhancing technology. Whether it is used by a judge, a legislator, or a government agency, it enhances an entity’s ability to shape the world. This is both its greatest strength and its greatest danger. In the hands of someone who wants more democracy, AI will help that person pursue it. In the hands of a society that wants to distribute power, AI can help make that happen. But in the hands of a person or society bent on centralization, concentration of power, or authoritarianism, it can be applied toward those ends just as readily.
We are not going to be fully governed by AI anytime soon, but we are already being governed with AI—and more is coming. Our challenge in these years is more a social than a technological one: to ensure that those doing the governing are doing so in the service of democracy.
Bruce Schneier is a security technologist and a lecturer at the Harvard Kennedy School, as well as at the Munk School at the University of Toronto. Nathan Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University. They are the authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.