Briefing

Take Back the Future: Democratising Power over AI

Introducing Common Wealth’s programme of work on challenging the concentrated market power of Big Tech.

Executive Summary

It is in the interests of the tech industry to have us believe that the artificial intelligence (AI) revolution is without precedent, positioning itself as the custodian of a quasi-mystical and possibly humanity-ending technology.

It is true that these new technologies pose very real threats, from misinformation to labour market disruption to new forms of surveillance and control. They also have important public benefit applications, from supporting new advances in healthcare to facilitating the energy transition. Nevertheless, we should resist the idea that the advance of AI is a decontextualised, singular event. In fact, other areas of political economy, public policy and progressive activism have often grappled with the same or similar questions of corporate power, and of how best to use the state to shape the trajectory of strategically important sectors for public good.

This year, Common Wealth is launching a new programme of research to explore precisely this question: where has our wider work on democratising the economy produced conceptual frameworks that can usefully be applied to the development of AI?

We take as our starting point the transformative technology trilemma articulated by the Collective Intelligence Project: how can we build and deploy technology in a way that is safe, advances progress and enables public participation? Progressives concerned about safety (not merely existential risk, but also the more mundane ways AI might compromise safety by intensifying discrimination or spreading misinformation) and about the anti-democratic, concentrated market power Big Tech has accrued over AI development may be tempted to sacrifice progress for the sake of the other two aims. But our contention is that we need not limit our ambitions. The safe and democratic development of technology is possible, just not under the current market structure.

Full Text

1. Diagnosis: It’s Big Tech’s Future and We’re Just Living in It

Any analysis of “artificial intelligence” is hampered from the outset by the term’s imprecision. Arguably now more a marketing term than a scientific one, AI is hard to disentangle from our cultural preconceptions of what machines with “human-like” intelligence might look like. Although AI is a deeply contested concept, covering a variety of computational tools and systems, it is still possible to specify some common features, at least of the currently most prevalent systems, that help explain why the stakes of these technologies are so high.

Following the EU AI Act, we can identify one of the key characteristics that distinguishes AI from other simpler computational systems: the capacity to infer. In other words, we might have two systems designed to make predictions from vast quantities of data. If one does so based solely on a set of rules given by a human programmer, it would not be an AI system. If the other does so based on deriving its own model or algorithm from the data, it would be an AI system.
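
To make this distinction concrete, here is a minimal sketch in Python contrasting the two kinds of system. It is illustrative only: the lending scenario, the hard-coded threshold and the toy data are assumptions of our own, not examples taken from the EU AI Act itself.

```python
from sklearn.linear_model import LogisticRegression

# Rule-based system: every decision criterion is fixed in advance by a
# human programmer. Nothing is inferred from data, so under the Act's
# framing this would not count as an AI system.
def rule_based_decision(loan_amount, income):
    # Hypothetical hard-coded rule chosen by a human
    return "approve" if loan_amount < 0.3 * income else "reject"

# Learning-based system: the decision boundary is derived from historical
# examples rather than written down by a programmer. This capacity to
# infer its own model from data is what marks it out as an AI system.
def train_learned_decision(features, outcomes):
    model = LogisticRegression()
    model.fit(features, outcomes)  # parameters are inferred from the data
    return model

# Toy usage with made-up [loan_amount, income] records
features = [[10_000, 50_000], [40_000, 45_000], [5_000, 60_000], [30_000, 35_000]]
outcomes = ["approve", "reject", "approve", "reject"]
model = train_learned_decision(features, outcomes)

print(rule_based_decision(10_000, 50_000))    # decided by the fixed rule
print(model.predict([[10_000, 50_000]])[0])   # decided by the inferred model
```

Both systems produce predictions from the same inputs; the regulatory distinction turns on where the decision logic comes from.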

At present, the most prominent breakthroughs in AI, including the GPT-4 model, have come from a particular machine learning technique, known as “deep learning”, that requires vast amounts of data and compute (the processing power needed to train these models). This has led to systems with more generalised capabilities than was previously the case.

Taken together, these features of AI systems give rise to particular challenges that, in some instances, policymakers are starting to grapple with. Specifically, the data- and compute-intensity of the most powerful models has given Big Tech firms a significant advantage, a position that may only grow more entrenched as the firms with the most widely used models are able to gather the most data to refine and improve those models. Using a variety of tools and resources, including venture capital (VC), access to cloud computing and funding for or involvement in academic research, Big Tech is able to further entrench its control over the entire AI ecosystem.

Returning to the transformative technology trilemma, this may be a system that produces some progress, although the true extent of that progress is hard to assess when Big Tech has a commercial interest in talking up the extraordinary capabilities of its models, and when the monopolistic and unequal nature of the AI market undermines the competition and innovation that drive progress in the first place. But whatever progress is achieved comes in a way that is both deeply undemocratic and unsafe. When the pursuit of profit and/or market share is privileged above all other concerns, we are all put at risk. Former employees of OpenAI have accused the company of rushing out products rather than focusing on safety, and, despite their net zero commitments, many Big Tech firms are now further from those goals thanks to AI-related emissions. Ultimately, we the public are left beholden to the decisions of a small number of tech firms, rather than having any collective say over the path of AI development.

It is neither natural nor inevitable that we must cede control of the future to Big Tech in this way. As the rest of this essay argues, an AI ecosystem calibrated for public good rather than private gain is possible. For a roadmap to get there, we draw on our experience in policy research and advocacy around green industrial strategy and democratising corporate governance.

2. A Tale of Two Transitions: Lessons from Green Planning for an AI Industrial Strategy

There has been of late a resurgence in industrial policy, where governments make strategic interventions in markets to prioritise particular outcomes. Many notable examples, including the Inflation Reduction Act in the US, the Net-Zero Industry Act in the EU and the new Labour government’s clean power mission, focus on how to accelerate decarbonisation relative to a baseline of market-led transition. Many governments are also exploring or enacting industrial policies related to AI, but such strategies often risk funnelling public money into the pockets of Big Tech without an adequate vision for how the AI sector might be reshaped or public value better retained. Using the lessons of green industrial policy, it is possible to sketch out what is missing from AI industrial policies and build the case for more ambitious, market-shaping state intervention.

At the core of both progressive energy and AI industrial policy should be the argument that the market, structured by the profit imperative, cannot deliver the desired outcomes by itself. In the energy sector, Common Wealth’s research has advocated for greater democratic planning and public ownership in response to the empirical reality that private investment has so far proved insufficient to power the green transition. In the AI sector, by contrast, levels of private investment are staggeringly high, with Alphabet, Meta, Amazon and Microsoft on track to spend a combined $200 billion this year alone.

As a result, the first challenge for any AI industrial strategy is successfully articulating the distortions and market failures that necessitate state intervention. The extent of market concentration, with the harms that entails for competition and research diversity, is one starting point, as is the push for a more progressive anti-trust strategy that works in tandem with industrial policy to counteract Big Tech’s position. As Mozilla’s Nik Marda notes, another starting point is to consider which forms of AI research and application are neglected when profit is the main incentive. As an example, he suggests the market is more likely to deliver AI systems that help landlords screen for good tenants than AI tools that help renters screen for abusive landlords, even though the latter may be of more social value. Finally, a further market failure worth investigating is the unsustainable, hype-driven way that AI companies, especially those seeking VC funding, develop and deploy technologies, which makes them a poor vehicle for otherwise socially useful innovations like healthcare apps.

The second challenge for an AI industrial strategy is setting a clear objective. Green industrial policy typically has a set of well-defined and measurable outcomes (including reducing emissions, increasing energy security and creating green jobs), which AI industrial policy has so far lacked. Instead, conceptions of the “public good” in AI industrial policy have ranged widely across competing paradigms.

In future research, Common Wealth will develop critiques of some of these paradigms (such as increased public funding for military applications of AI), as well as set out our own positive vision of what constitutes the “public good” in AI policy. Our starting point is that, unlike climate targets, AI industrial policy is unlikely to converge on a single outcome (or set of outcomes), because of the genuine trade-offs and uncertainties inherent in the development and deployment of AI. As a result, democratic input becomes essential, especially when AI will plausibly touch so many aspects of our lives and our economy. We cannot know what the “public good” looks like without asking the public, and the current state of the AI sector, in which a handful of companies determine the direction of travel, is antithetical to this democratic imperative.

In the following sections, we explore some of the ways this democratisation of AI might be achieved, both “from above” (changing how Big Tech companies function) and “from below” (fostering more pluralistic ownership structures in the AI economy).

3. From Participation to Power: Democratic Governance of Technology Companies

In our work on corporate governance reform, Common Wealth has critiqued the company as an economic institution with an overly narrow view of who should hold the power to make decisions. Under current rules in the US and the UK, shareholders’ interests are prioritised at the expense of other stakeholders, such as workers and the climate, leading to an extractive form of capitalism in which corporate earnings are increasingly used for dividends and share buybacks rather than for investment and higher wages.

As an analytical frame, this is a useful way to consider how large tech companies operate, even if some of the specifics differ, especially where firms are not publicly listed. The position of shareholders within Big Tech is somewhat more ambiguous than in the wider economy, thanks to the common practice of issuing multiple share classes with different voting rights, such that founders and tech executives retain greater control. In addition, many Big Tech firms only recently started paying dividends (Amazon still does not), though they have made extensive use of buybacks as an alternative way of compensating shareholders.

Nevertheless, the central point holds: Big Tech companies and their satellites (such as OpenAI) empower founders, shareholders and investors to make decisions over the development and deployment of AI, to the exclusion of workers (especially those precariously employed doing poorly paid data labelling and content moderation work), service users and those impacted by climate or supply chain effects. This results in systems designed without due attention to a multitude of harms including bias and discrimination, misinformation and harmful content, labour market disruption, emissions and water usage, and the exploitation of data workers.

In response to criticism of the undemocratic structure of AI development, some companies have begun to explore more public participation in their research. But democratising the development of AI with public input is not the same as democratising its governance. Without giving real power to a mix of stakeholders, these participation efforts can end up as little more than survey exercises that do not change decision-making within Big Tech.

Some established corporate governance reform demands might be relevant for democratising decision-making over AI. Putting workers on company boards and giving them block voting rights at AGMs, for example, could be genuinely significant in a firm like Amazon. However, the challenge in the AI sector is that firm-level democratisation is only one piece of the puzzle, given the complexity of the supply chain and its various labour inputs, as well as the range of communities affected when systems are deployed. A broader set of governance proposals will be needed, and Common Wealth plans to explore these in future research.

Examples of new corporate governance rules for AI companies might include oversight boards with broad public representation of users and/or affected communities, but only if these had some kind of genuine regulatory force (Meta’s oversight board, for example, saw over 40 per cent of its recommendations declined or otherwise not acted upon by the company). Existing examples of such oversight mechanisms being given statutory footing at private companies are rare, but new proposals in the UK to give water companies’ customers the ability to hold board members to account could provide a legislative precedent.

Thinking through governance interventions at various stages of the AI supply chain could also be a fruitful way to generate alternative proposals. For example, collective governance of training data could be one way of democratising decision-making around an essential AI input; or, since compute is a particularly effective regulatory entry point into the AI supply chain, new institutions might democratically determine how governments manage access to compute in line with public preferences.

4. Pluralising Ownership of AI: Supporting Corporate Forms Beyond Big Tech

Whilst democratising the corporate governance of major AI firms is an important goal, since the advantages of scale mean large companies are likely to dominate for the foreseeable future, there is no reason to accept that private corporate ownership is the only vehicle for AI research and development. And while there are growing calls for public ownership of AI, there are reasons to be cautious about that ownership structure too, given the precedent of governments using new technologies to extend surveillance and deepen the military-industrial complex.

Instead, what is needed is a plurality of democratic ownership models in the AI sector. Existing research has, for example, highlighted the need for alternatives to VC-backed startups, and explored how community organisations can already be sites of technological innovation. Common Wealth plans to map out the corporate forms that are currently used, or could be used, to diversify ownership in AI, drawing on our existing expertise in the democratic and co-operative economy.

Our research will also focus on the levers available to governments to support the flourishing of these alternative ownership models. For example, given that access to compute is a barrier for smaller AI organisations, investment in public compute, coupled with preferential access for organisations meeting certain ownership and governance requirements, could be one way of shaping the AI market. So could the development of publicly owned, open-source foundation models, which would be easier and cheaper for democratic AI organisations to access than Big Tech’s models. Ringfenced public R&D funding and preferential access to public contracts and public datasets are further levers for supporting a more diverse set of AI ownership and governance arrangements.

5. Conclusion

For all that Big Tech speaks the language of speculative fiction about what AI might be capable of, its imagination is sorely limited when it comes to how AI can be developed. But we should not accept things as they are, with a handful of billionaires dictating the terms of our future. Nor should we accept that there is a trade-off between technological progress and safer, more equitable ways of arranging the AI economy. This essay has sketched the contours of an alternative, which our future research will build out further: one in which greater public and democratic coordination, buttressed by an active and ambitious industrial strategy for AI, can challenge the power of Big Tech.

Today’s AI systems depend on data generated by all of us, scraped from the internet to capture the sum total of human endeavour: from the most celebrated works of literature, music and art to the mundane but no less meaningful social interactions we have with friends and loved ones online. We all share in these models’ creation. It is only right that we share in their governance and their benefits, too.
