AI Should Help Fund Creative Labor

August 1, 2025

LONDON – Generative AI models are built on the collective work of countless people. Behind every AI-generated response lurks a vast, invisible workforce – writers, singers, journalists, poets, coders, illustrators, photographers, and filmmakers – whose creations have been used without permission or compensation. These creators have never met, let alone billed, the Silicon Valley titans profiting from their labor.

Unsurprisingly, many are now speaking out and demanding meaningful reform. In October 2024, more than 10,000 actors, musicians, and authors signed a public statement warning that unlicensed use of their work to train generative AI poses a “major, unjust threat” to their livelihoods and insisting it must not be permitted. Within months, the number of signatories had risen to 50,000.

Instead of tightening copyright protections, as many propose, we should treat creative knowledge as a public good and collectively fund its production. Like roads, vaccines, and public broadcasting, it should be accessible to everyone and paid for by everyone.

The economics of the issue are well known. Information often functions as a public good: it is difficult to exclude people from accessing it, and the cost of copying it has plunged to nearly zero. When a good cannot easily be fenced off, markets tend to fail, because people prefer to free-ride on others’ investments rather than pay for access themselves. And because digital distribution is even harder to fence off than traditional media, online information behaves even more like a public good.

The power of generative AI models like ChatGPT lies in their ability to produce coherent, convincing responses by synthesizing massive amounts of data. That’s why AI companies scrape all the data they can find, much of it drawn from the public domain. Because this content is often freely accessible online, preventing its collection is extremely difficult. In fact, some reports suggest that the largest AI models have already consumed almost all of the publicly available information on the internet.

The opaque, black-box nature of AI models makes it virtually impossible to trace specific outputs back to individual inputs, complicating efforts to enforce copyright laws. But even when violations are clear, governments have been reluctant to act, fearing that intervention might stifle innovation – a cardinal sin in contemporary capitalism.

Compounding the problem, AI content now competes directly with the original creators whose work was used to train the models. Some local news outlets have already laid off reporters after adopting automated story generators. Image banks are being inundated with AI-generated artwork, and software firms are increasingly relying on tools like GitHub Copilot to churn out boilerplate code, thereby reducing demand for junior developers.

Many content creators are pushing back. The New York Times is suing OpenAI for ingesting its archives; prominent writers, led by the Authors Guild, have launched a class-action lawsuit, claiming that OpenAI has violated their copyrights; and Disney and Universal are suing image-maker Midjourney for piracy. Similarly, the world’s three largest record labels – Sony, Universal, and Warner Music – have taken AI song generators Suno and Udio to court, accusing them of copying their entire catalogs.

The anger is understandable. In the name of innovation, governments are pouring ever-larger sums into AI developers – often through opaque contracts with few strings attached – while large language models (LLMs) free-ride on unpaid creative labor. The burden is not only economic but environmental: the data centers that power AI systems consume staggering amounts of energy and water, putting additional strain on public infrastructure.

As the sector continues to expand rapidly, these demands are expected to intensify. McKinsey estimates that generative AI could contribute up to $4.4 trillion annually to the global economy, and OpenAI alone is projected to generate more than $12.7 billion in revenue this year (though it remains unprofitable).

At the same time, the very journalists, illustrators, and musicians whose work powers AI models are left scraping by on falling piece rates and shrinking budgets. The US National Endowment for the Arts, for example, will receive just $210 million in government funding in fiscal year 2025, which is roughly 0.003% of the federal budget.

AI Is Eating the Commons

There are two main ways to address this issue: try to “fix” the market or design a public-oriented alternative. The first option involves governments strengthening excludability by erecting digital walls, bolstering intellectual-property rules, and enhancing copyright enforcement. Stronger property rights, the argument goes, would turn creative content into a more private good, curb free-riding, and redirect resources from model developers to creators through royalties.

Some platforms and publishers have already tried this approach in an effort to monetize access. Reddit, for example, licenses its vast archive of user comments to Google, while the Guardian, the Associated Press, and Shutterstock have all reached agreements with OpenAI, allowing the company to train its models on their content.

But in most cases, the entities licensing the data are not the original creators, and they often wield disproportionate power over those who are. Musicians on Spotify and video creators on YouTube illustrate the problem: even when platforms pay, only a small fraction of the revenue reaches the artists.

The real competition in today’s AI sector isn’t about improving services. Instead, it’s about commandeering user attention through algorithmic manipulation and extracting what we call algorithmic attention rents – a tax paid not in cash, but in cognitive resources. Stricter copyright enforcement risks entrenching a form of digital feudalism, enabling dominant platforms to cordon off huge swaths of online content while extracting value from the creators who actually produce it.

Moreover, applying the licensing approach to the entire universe of material on which LLMs rely is simply unfeasible. Markets function only when transaction costs are low relative to the value exchanged. But when the potential rights-holders include millions of scattered writers, photographers, coders, and hobbyists, transaction costs balloon. While a pay-per-byte approach might work for large content owners like Reddit, no legal, bureaucratic, or algorithmic system could realistically set a bespoke price for every fragment of the terabytes of text, code, images, and audio consumed by AI models with trillions of parameters.

Worse still, imposing such barriers would stifle innovation and marginalize smaller firms, students, and independent creators. The result would be a gated intellectual landscape that enriches a privileged few while starving the very creativity it purports to protect.

Markets, however, are just one method of allocating resources, and sometimes they fail outright. In such cases – especially when it is difficult to exclude non-payers or when transaction costs are prohibitively high – alternative arrangements are not only justified but necessary.

The Public Alternative

Rather than patching up a market ill-suited to managing public goods, governments should actively nurture the cultural commons by steering innovation toward public purposes. Just as we pool taxes to fund streetlights, law enforcement, and basic research, the production of creative content in the era of generative AI should be publicly supported, and its outputs kept in the public domain. In short, the state must be entrepreneurial.

This idea is not new. The BBC license fee, France’s National Center for Cinema, and even US states’ film-production tax credits have long supported major global hits, from the BBC documentary Blue Planet II – which was reportedly watched by 80 million people in China and temporarily slowed the country’s internet – to the ABC/BBC cartoon Bluey, which became the most-streamed program in the United States in 2024.

Above all, the public model generates immense value. It provides creators with stable funding, fosters innovation aimed at citizens rather than advertisers, and enables artistic risk-taking and experimentation. It also helps preserve shared cultural heritage, in turn enriching education, strengthening social bonds, and galvanizing democratic debate in ways that market-driven models rarely do. Of course, the case for such an approach extends far beyond broadcasting and cinema and applies to all forms of art, media, and creative expression.

Because generative AI models are trained on human-created content, the value of art to society takes on a new dimension. By increasing the volume and diversity of creative output, these technologies amplify the reach and impact of human creativity. One could argue that by repurposing creative works, AI has expanded the art multiplier: each dollar spent on the arts now yields its usual social return, as well as additional value derived from its incorporation into AI systems.
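To make that claim concrete, the “art multiplier” argument can be written as a back-of-the-envelope identity (the notation is ours and purely illustrative, not a formal model from the authors):

\[
R_{\text{total}} \;=\; m \;+\; m_{\text{AI}},
\]

where \( m \) is the conventional social return per dollar of arts spending and \( m_{\text{AI}} \) is the additional value generated when the resulting works are incorporated into AI systems. Before generative AI, \( m_{\text{AI}} \approx 0 \); the claim here is that it is now positive and growing, so the total return on each dollar of arts funding, \( R_{\text{total}} \), has risen even as that funding has fallen.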

Yet, despite the value of their contributions, public funding for artists and creators has steadily declined. In the United Kingdom, for example, direct support from the Department for Culture, Media and Sport to national arts bodies fell by 18% per person in real terms between 2009-10 and 2022-23. Over the same period, core funding for arts councils dropped by 18% in England, 22% in Scotland, 25% in Wales, and 66% in Northern Ireland. As generative AI continues to churn out synthetic content and displace human labor, that support must increase to reflect the realities of a changing creative economy.

Admittedly, with public finances under pressure and debt on the rise, this is hardly the time for unchecked government spending. Any additional funding would need to be financed responsibly. While a detailed policy blueprint is beyond the scope of this article, it’s worth noting that the enormous profits generated by major tech firms could be partially redirected to support the creative communities that power their models.

One way to achieve this would be to impose a levy on the gross revenues of the largest AI providers, collected by a national or multilateral agency. As the technology becomes increasingly embedded in daily life and production processes, the revenue flowing to AI firms is bound to grow – and so, too, will contributions to the fund. These resources could then be distributed by independent grant councils on multiyear cycles, ensuring that support reaches a wide range of disciplines and regions.
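A simple worked example gives a rough sense of scale (the levy rate and revenue figure below are hypothetical assumptions for illustration, not figures proposed in this article). A levy at rate \( r \) on gross revenues \( R \) would raise \( r \times R \) per year; for instance,

\[
0.02 \times \$50\ \text{billion} \;=\; \$1\ \text{billion per year},
\]

several times the $210 million the US National Endowment for the Arts receives in fiscal year 2025.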

Such an arrangement would align incentives across the board. Model developers would retain frictionless access to the vast pool of publicly available knowledge and art without the legal morass of micro-licensing. Creators would gain a stable income stream, decoupled from advertising markets and opaque platform algorithms. Perhaps most importantly, the public would benefit from a growing cultural commons and open access to culture – both sustained by collective investment.

Every time we ask ChatGPT to draft a speech or proofread an email, we are quietly drawing on the labor of millions. In this sense, cultural production has already become a kind of planetary cooperative; updating its funding model to reflect that reality is long overdue. As OpenAI CEO Sam Altman recently observed, the age of AI calls for a broad societal “alignment”: a rewriting of the social contract to address the imbalances created by emerging technologies. That alignment can happen through thoughtful policy or through crisis and disruption. But one way or another, it is coming.

(Mariana Mazzucato is Professor in the Economics of Innovation and Public Value at University College London and the author, most recently, of “The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments and Warps Our Economies” (Penguin Press, 2023). Fausto Gernone is a PhD student at the UCL Institute for Innovation and Public Purpose.)

 Copyright: Project Syndicate, 2025.