Guidance note on Artificial Intelligence
These guidelines have been developed to assist Press Council members. They are not binding and they do not extend the Press Code. No complaint may be lodged in terms of these guidelines.
Introduction
Artificial Intelligence (AI) offers journalism and the media new tools in many areas of work, including data analysis, production, audience understanding, targeting and much else. But generative AI can create text, visuals and audio that are often difficult to distinguish from material created by humans. As a result, reactions to AI in journalism have ranged from enthusiastic acceptance to strong fears that it will add to the loss of journalism jobs and boost misinformation.
Though it is too early to foresee the impact with any certainty, it is important for news organisations, newsroom leaders and journalists to be thoughtful when they deploy new AI tools, and to consider them in the light of the ethical principles that support audience trust in journalism.
At all times, any use of AI should be for the benefit of journalism and audiences.
In this spirit, the Press Council offers the following guidance note. The Press Code remains the authoritative document, and the existing rules in the Code apply fully. These notes are simply an attempt to draw out some of the specific implications that may arise.
1. Accountability
Member publications retain editorial responsibility for everything that is published, no matter which tools are used in production. To ensure compliance, any AI-generated material must be checked by a human before publication.
2. Accuracy
Generative AI is prone to inventing facts (so-called ‘hallucinations’). Journalists should carefully check the facts in any AI-generated text. AI tools have also made it easier to generate misinformation, so claims circulating on social media and elsewhere need to be checked even more carefully than before.
3. Bias
Algorithms can reflect and amplify race, gender and other biases present in the material they are trained on. Media organisations should keep a keen lookout for bias when using AI tools and correct it wherever it appears.
4. Transparency
News organisations should offer their audiences maximum transparency about their use of AI tools. A comprehensive statement of the organisation’s AI policy and its use of specific tools should be easily available to audiences and kept current. If tools have been used in the generation of particular items, this should be indicated clearly.
5. Targeting
AI tools that tailor content to audience preferences should be deployed in ways that guard against the creation of filter bubbles.
6. Organisational considerations
AI tools may relieve journalists of some routine tasks. Media organisations should not use AI innovations simply to cut costs. Any savings should be reinvested in quality journalism. Staff should be given training in the use of AI, to enable them to adapt to new technological requirements.
7. Privacy
Personal data may be used in the development of AI systems, and member publications should take care not to infringe relevant rights or contravene legislation such as the Protection of Personal Information Act (POPIA).
8. Intellectual property
Generative AI models are trained on large amounts of data, often without acknowledgement of the intellectual property rights of the originators. This includes text published by news media. Though solutions to the problem are not yet clear, journalists and media organisations need to be aware of the issue, both with respect to their own intellectual property and their use of AI tools that may not have fully recognised the rights of others. Care should also be taken not to expose journalists’ own unpublished texts to generative AI tools, so that confidentiality and security are maintained.
Other resources
The list of resources is expanding very quickly. Here is a small selection:
- Find out more about JournalismAI, a centre at the London School of Economics (LSE), from https://www.lse.ac.uk/media-and-communications/polis/JournalismAI/About-JournalismAI
- Charlie Beckett’s 2019 report, ‘New powers, new responsibilities: A global survey of journalism and artificial intelligence’, published by Polis, can be read here: https://www.lse.ac.uk/media-and-communications/polis/JournalismAI/The-Report
- Hannes Cools and Nicholas Diakopoulos’s 2023 article, ‘Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that’, published by Nieman Lab, can be read here: https://www.niemanlab.org/2023/07/writing-guidelines-for-the-role-of-ai-in-your-newsroom-here-are-some-er-guidelines-for-that/
- The 2023 Paris Charter on AI and Journalism can be read here: https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20charter%20on%20AI%20in%20Journalism.pdf
- Patricia Ventura Pocino’s 2021 paper, ‘Algorithms in the newsrooms: Challenges and recommendations for artificial intelligence with the ethical values of journalism’, published by the Catalan Press Council, can be accessed here: https://www.researchgate.net/publication/377979122_Artificial_Intelligence_in_Newsrooms_Ethical_Challenges_Facing_Journalists
- The Guardian’s approach to generative AI is available here: https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai