Generative AI depends on quality journalism – and should pay for it
When it comes to the use of generative AI by journalists, publishers have both responsibilities and rights. In a world where media and other institutions face a crisis of trust and credibility, it is important for news outlets to adopt ethical principles governing the use of AI and to be clear with audiences about when journalists use generative AI to gather and analyse data.
Already, many journalistic organisations have adopted such codes of practice.
Reporters Without Borders found that most of the codes focus on some combination of these topics:
- Human oversight of published content
- Transparency about published synthetic content
- Limits set on the use of generative AI by journalists
- The need to protect privacy
- Human responsibility for any published content, and
- The risk of bias embedded in generative AI tools
Publishers are also trying to figure out which of their content is useful, how to tailor it to the new world of prompts, and how to operate in an environment that requires grounded data (i.e. data grounded in verified information) and up-to-date, verified facts.
For example, the Axel Springer agreement with OpenAI isn’t just about selling the rights to use the Axel Springer archive; it will require Axel Springer to provide summaries, based on content in its publications, in response to ChatGPT prompts.
But while journalism organisations are embracing generative AI as part of newsroom practice and are well aware of the ethical considerations and obligations to audiences, they are also wary of the economic impact of large companies profiting from their content. Journalists fear that traffic will plummet as generative search replaces news-seeking by audiences.
Large Language Models (LLMs) can synthesise and analyse data from large numbers of sources, which may discourage individuals from consulting the original sources.
Compounding the financial difficulties media outlets already face is the fear that, if LLM companies do not compensate the original sources, publishers' income will shrink, their funding will dry up, and the production of original, valuable information will decline. And the problem cuts both ways: as the old expression GIGO (garbage in, garbage out) has it, LLMs can only be as good as the data on which they are trained.
The LLM companies, however, do not seem to have fully taken this on board and are apparently unwilling to pay for the knowledge they use, thereby risking killing the goose that lays the golden eggs.
This problem is even more acute if there are multiple LLMs, as then none of them will take responsibility for their collective effects on the information ecosystem. There is even the risk that they will undermine the business model of the search engines, which themselves have been undermining the quality information ecosystem.
News publishers had hoped for attribution and links to their content, but this seems unlikely, as reports have emerged that Perplexity and OpenAI are unable to consistently cite and link to their sources.
In my new issue brief, AI and the future of journalism, published by UNESCO this week, I note that without payments to publishers or creators there will be no incentive to produce quality information. Without payment, there will be less diversity. And if the value created by content is not shared, only the large monopolies will benefit.
In the issue brief, I also outline several recommendations for governments, media companies and publishers, as well as intergovernmental organisations, on ways to improve the situation. These are issues that need to be widely discussed in the near future.
One point of contention is whether the publishers or the creators should receive the payments from the large AI firms.
Journalists can learn from other industries, such as music, where artists and their publishers have worked out such arrangements.
In Brazil, musicians have asked for a share of residuals. In Belgium, European copyright directives call for creators to be paid, while publishers' negotiations with Google do not provide for this.
Publishers feel strongly that they incur the costs and the risks, and that payments should therefore go to them.
Le Monde signed an agreement with OpenAI in 2024 and will give 25% of the revenue to its journalists, as they are stakeholders in the news outlet.
Deciding how to value news, and how to allocate money between publishers and creators, can be done. Moreover, fair payments are necessary to safeguard the future of news and to help preserve the quality of generative AI results.
The alternative would be a world even more contaminated by misinformation, confusion, mistrust and a lack of accountability.
- Read the issue brief here.
- Schiffrin is a Senior Lecturer in Practice and the Director of the Media, Technology and Communications specialisation at Columbia University’s School of International and Public Affairs (SIPA)