Op/Ed

Hector Vila: AI governance looming large

HECTOR VILA

Governance has dominated our discourse since at least 2016. We’ve been wrestling with the distribution of rights and responsibilities among different participants; rules and procedures for making decisions; the systems for monitoring and enforcing compliance; and the methods for balancing the interests of different stakeholders. These debates shaped both the 2024 presidential race and Vermont’s elections.

Technological governance is yet another worry. How will artificial intelligence (AI) be used, and to what end? Can we hurt each other with it? Are tech companies too large, and do they need regulation, particularly around data protection and privacy?

Let’s imagine governance as the “operating system” of an organization or society, the framework that determines how decisions are made, power is exercised, and stakeholders interact. Now imagine the likes of Elon Musk (Grok, by xAI) setting up an office at Mar-a-Lago; or Sam Altman (OpenAI) on bended knee before the president-elect, which did happen; or Mark Zuckerberg (Meta) ending fact-checking, leaving it to users.

From Musk’s efficiency crusade to Altman’s Mar-a-Lago pilgrimage to Silicon Valley’s million-dollar presidential inaugural donations, tech titans are orchestrating a calculated campaign to shape AI policy, national security, and their industry’s future.

Regulating AI is complex; there are many interrelated moving parts:

• Accountability and liability: attributing causation when AI systems make decisions.

• Safety and risk management: managing risks from increasingly capable AI systems and establishing testing and validation frameworks.

• Rights and fairness: preventing and mitigating algorithmic bias and discrimination, ensuring equitable access to AI benefits across society, and protecting the privacy and data rights of individuals.

• International coordination: developing shared standards and norms across jurisdictions and managing AI arms races between nations.

• Democratic oversight: creating transparency in AI development and deployment and establishing processes for public input and democratic deliberation.

AI governance can, and likely will, fundamentally reshape American democracy. Accountability and liability issues with AI decision-making will force our legal system to evolve rapidly, while safety and risk management demands will require unprecedented regulatory agility and new forms of emergency response. The imperative to ensure rights and fairness could transform civil rights protections, privacy laws, and mechanisms for algorithmic oversight, potentially redefining the relationship between citizens and technology. Meanwhile, international coordination needs may revolutionize diplomatic frameworks and trade relationships as nations navigate AI arms races and competing standards. Perhaps most critically, the demand for democratic oversight could spark innovative models of public engagement and transparency in technical decision-making, fundamentally altering how government agencies operate and how citizens participate in technological governance. These intertwined challenges demand a wholesale reimagining of governance structures, potentially creating new agencies, redefining existing ones, and developing novel processes for managing complex technological challenges while preserving democratic principles.

The traditional approach (xAI, OpenAI, Meta) relies primarily on Large Language Models (LLMs) trained on massive datasets, focusing on pattern recognition and statistical relationships. While powerful, these systems often operate as “black boxes” and require enormous computing resources and data. The Active Inference approach, developed by neuroscientist Karl Friston, bridges neuroscience and AI development and addresses those accountability concerns. It represents a fundamental shift in AI development: it models how biological brains process information rather than relying on massive data patterns. The method creates smaller, more efficient AI systems that can explain their decisions, adapt continuously, and naturally network with other agents. This matters for three key reasons: the systems are more governable since they can explain their actions, more efficient since they need less data and computing power, and more scalable through their ability to network and share knowledge. Together, these qualities offer a path to AI that could be both powerful and inherently aligned with human values and oversight needs.
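To make that less abstract, here is a minimal, hypothetical sketch in Python of the kind of loop Active Inference describes: a toy agent that keeps an explicit belief about the world, updates it when it observes something, and picks the action whose predicted outcomes best match its preferences. The two-state world, the matrices, and the names are all invented for illustration; this is not VERSES’ or Friston’s actual software, only the textbook pattern in miniature.

```python
import numpy as np

# Likelihood: P(observation | hidden state); invented numbers.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Transitions: P(next state | current state), one matrix per action.
B = {
    "stay":   np.array([[1.0, 0.0],
                        [0.0, 1.0]]),
    "switch": np.array([[0.0, 1.0],
                        [1.0, 0.0]]),
}

# Prior preference over observations (the outcomes the agent "wants").
C = np.array([0.9, 0.1])

def update_belief(belief, obs):
    """Bayesian update of the belief over hidden states after an observation."""
    posterior = A[obs] * belief          # likelihood times prior
    return posterior / posterior.sum()   # normalize

def expected_risk(belief, action):
    """Risk term of expected free energy: divergence between the observations
    an action is predicted to produce and the preferred ones. (The full
    formulation also includes an ambiguity term, omitted for brevity.)"""
    predicted_states = B[action] @ belief
    predicted_obs = A @ predicted_states
    return float(np.sum(predicted_obs * np.log(predicted_obs / C)))

belief = np.array([0.5, 0.5])            # start uncertain about the state
belief = update_belief(belief, obs=0)    # observe outcome 0, revise belief
action = min(B, key=lambda a: expected_risk(belief, a))

print(f"belief={belief.round(2)}, chosen action={action!r}")
```

The governance point is visible in the code itself: the belief, the prediction, and the score for each candidate action are ordinary numbers a regulator could inspect at every step, unlike the opaque internals of a large language model.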

Friston’s socio-technical standards framework offers a potential answer to the rapid evolution needed in legal and regulatory systems by providing machine-readable, executable formats for regulations that can adapt quickly to new challenges. Active Inference, a neuroscientific principle, decodes how intelligence manifests in nature and proposes standards that encompass human rights and transparency requirements to help preserve civil rights protections. The traditional approach (xAI, OpenAI, Meta) does not do this. Active Inference can better align with human goals and values while maintaining transparency, and these systems reportedly require less data than current AI approaches while offering better explainability. The international coordination challenge is addressed through universal interoperability, which could prevent fragmented regulatory approaches across nations and, most significantly, provide a structured environment where democratic oversight can be tested and refined. In contrast to OpenAI’s and Meta’s hierarchical, black-box models, this vision of AI systems can interpret and execute governance requirements while maintaining transparency and accountability. It could help preserve democratic principles while enabling the technical agility needed to manage emerging AI capabilities.
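What a “machine-readable, executable format for regulations” might look like is easier to grasp with a toy example. The sketch below, again hypothetical and in Python, encodes one invented rule as data that software can both read and enforce; the rule, its fields, and the checker are illustrative assumptions, not any real standards framework.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """An invented, machine-readable regulation: data, not prose."""
    id: str
    applies_to: str      # the category of automated decision it governs
    requires: set[str]   # records the system must produce to comply

RULES = [
    Rule(id="XR-1", applies_to="credit_decision",
         requires={"explanation", "human_review_path"}),
]

def check_compliance(decision_type: str, records: set[str]) -> list[str]:
    """Return the ids of any rules this decision record fails to satisfy."""
    return [r.id for r in RULES
            if r.applies_to == decision_type
            and not r.requires <= records]

# A credit decision that logged an explanation but no path to human review:
print(check_compliance("credit_decision", {"explanation"}))  # ['XR-1']
```

Because the rule is data, updating it when the law changes is an edit rather than a rewrite, which is the kind of regulatory agility the framework promises.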

To my knowledge, VERSES, headquartered in Vancouver, Canada, and headed by founder and CEO Gabriel René, is the only initiative moving in these directions. If you’ve read this far, your own research might show you how a cognitive computing company is outperforming the tech titans of AI on all levels. (Results of their latest testing, namely that Active Inference-based models can achieve comparable or better performance than current state-of-the-art reinforcement learning models using 90% less data and a fraction of the compute, will be shared at the World Economic Forum Annual Meeting 2025, taking place Jan. 20-24 in Davos, Switzerland.) What I say here concerning governance and AI is a scoop for the Addy Indy. None of the weighty popular media (NYT, WAPO, not even the “hot” Free Press) has caught on.

Tech oligarchs like Musk, Altman, and Zuckerberg are betting on traditional government patterns: slow-moving, influenced by wealth, and serving the powerful few. Meanwhile, unprecedented challenges like climate disasters show we need nimble, adaptive governance more than ever. Nimble, adaptive institutions are what the future is asking for, and what AI is telling us we need now.

Hector Vila is an associate professor of Writing & Rhetoric at Middlebury College.
