- Public Service Media and Generative AI
How should public service media adopt and integrate Generative AI into their workflows? And how are public service broadcasters assessing and scrutinising the risks and concerns raised about the technology?
This page is a live resource, with the latest policies, guidelines, strategies and approaches being adopted by public service media – both PMA members and non-members – globally, as well as the latest academic research into AI and public media.
As Generative AI becomes more prevalent and more integrated into media organisations, public service media carry a special responsibility to be transparent and accountable in how they manage its use. While multilateral efforts to regulate AI are underway, public broadcasters are already recognising its potential to improve their own processes and output. But PSM are also tempering that optimism with caution about the potential ramifications of AI: how it might affect their journalism, public trust in that journalism, and the information sphere more broadly.
As outlined below, many of the same concerns and hopes have been voiced by multiple broadcasters. Yet each organisation may set different priorities or take a different approach: Swedish Radio, for example, has established an AI Council, while Yle has said it will only use Gen AI technology developed in Finland.
Read more: How public media is adopting AI
There is, however, a shared feeling amongst broadcasters that Gen AI can be a force for good if harnessed correctly, in accordance with the public service values PSM embody. Such strategy documents are therefore imperative to ensure public media workers employ Gen AI only where those values are bolstered, not undermined.
Has your public broadcaster unveiled its own guidelines for using Gen AI?
Click on the tabs below to find out how each public broadcaster is reacting to Gen AI.
VRT – Belgium
On 9 November, VRT published its vision for implementing AI in the daily practice of its staff.
To that end, the public broadcaster is developing a set of AI policies in line with its independence, trustworthiness and professional integrity.
As Tatjana Vandenplas, Head of Innovation Development, explained, VRT is looking at “potential applications that can solve problems and add value” to its services. The use of generative AI will focus on creating innovative content and making production more efficient, but also on automating supporting processes, such as writing summaries based on news articles by VRT journalists.
Key domains of focus for the VRT:
- (Re)creating content
- Strengthening accessibility
- Factchecking
The VRT also emphasises that AI can never replace a person, but that it is a valuable tool to assist with different tasks.
BR – Germany
The Bavarian broadcaster has published its AI ethics guidelines, which it says are largely designed to ensure any new technology serves its obligation to provide good journalism. “We want to help shape the constructive collaboration of human and machine intelligence and deploy it towards the goal of improving our journalism,” the document reads.
BR has published ten core guidelines that must be met when working with new technologies, but it says the work of its journalists is and will continue to be irreplaceable.
These include ensuring the technology benefits users, using it transparently, and keeping track of the data it is fed. BR must also be conscious of the diversity of Bavaria, for example by ensuring speech-to-text tools incorporate regional dialects.
It says editorial control must be maintained at all times, and any news personalisation must not feed into creating news bubbles for users.
SWR – Germany
For SWR, AI presents opportunities: expanding digital research, producing attractive and timely content, and making workflows more efficient.
But there are also complexities and risks, with AI posing fundamental ethical questions for journalism, such as the risk of wrong decisions, legal breaches, or the publication of misinformation.
SWR says the use of AI must provide clear added value for its mission, and the organisation’s journalistic principles must apply without exception. Humans should always keep editorial control, make the final publishing decisions, and bear ultimate responsibility for them. All AI-generated content must be transparently labelled in order to maintain credibility.
SWR also seeks to contribute to the development of AI in journalism and is looking to participate in suitable initiatives and partnerships.
ZDF – Germany
ZDF announced its own set of principles for use of Generative AI on 26 October. Nine principles in total have been outlined which will direct how Gen AI might be integrated across the organisation. But the foremost principle is that while Gen AI can support editorial teams, it cannot replace them.
Selected other principles outlined by ZDF:
- Commitment to transparency over how, when and where Gen AI is used
- Not using Gen AI as a source
- Content created with support of Gen AI is subject to principles of journalistic due diligence
- Gen AI tools will not be enriched with sensitive data, although some exceptions are possible, if reviewed by the editorial team
“In order to be able to use the opportunities of AI in everyday work, we have to be aware of the risks. That’s why we need the guardrails,” said ZDF’s Editor-in-Chief, Bettina Schausten.
LRT – Lithuania
Lithuanian Radio and Television has published guidelines it says will determine the responsible and ethical use of AI, while still utilising it to help fulfil LRT’s mission to inform, educate and mobilise the public.
The guidelines say editorial responsibility should be maintained at all times, and additional protective measures should be taken when AI is used to ensure LRT remains reliable. Content generated by AI should be clearly marked, the document says, and the security of AI tools should be ensured.
LRT says the AI guidelines apply to all employees of the organisation, as well as to creative workers and external producers contributing content to LRT.
RTVE – Spain
On 29 November, RTVE announced its principles for responsible artificial intelligence. The proposal offers guidance for public and private media on how to implement the new technology in their daily practice.
The common thread linking the principles and rules laid down is that AI is considered above all a tool. In no case should it replace professionals; rather, it should be used as an aid to improve their work and elevate the quality of the service the media offer. This aligns with the foundational public service media mission: to inform, entertain and educate, thereby upholding and reinforcing public service media’s democratic responsibility.
In this regard, the principles established by RTVE state that AI technologies should be used in accordance with the tenets of democracy and “ensure, when AI is applied, the independence, purpose, universality, accessibility and responsibility of our activities and content” are maintained to “regain the credibility and trust of societies and audience.”
RTVE also mentioned the importance of “implementing AI technologies at the forefront of the media industry in a responsible and human-centred manner,” to embrace innovation in a responsible way.
Swedish Radio – Sweden
On 7 July, Swedish Radio published its policy for how it will use Generative AI, considering both the opportunities and risks presented by AI. The public broadcaster already uses AI in its recommender systems and news curation. It also uses AI tools to transcribe all audio to text.
But, as their News Commissioner, Olle Zachrison, outlined, Gen AI carries “significant risks – around journalism, law and security.” New and developing technologies capable of generating text, images, video, and audio “pose special challenges for serious media players … There are difficult balances to be made here in relation to our journalistic core values.” As such, Swedish Radio has established a company-wide AI council with three main tasks:
- Indicate which AI applications are of greatest strategic value
- Identify “journalistic, legal and security issues surrounding AI development and … propose guidelines where necessary”
- Initiate and participate in learning for all Swedish Radio staff
Swedish Radio has published its policy document and has also promised to continue engaging in dialogue with other media entities.
PTS – Taiwan
In its AI usage guidelines sent to PTS staff, the public broadcaster outlined five basic principles which would dictate how it employs AI:
- Respect for human autonomy
- Avoid harm
- Fairness and common good
- Transparency and accountability
- Upholding public value
On top of these principles, the broadcaster has also compiled a more specific usage guide, looking to pre-empt potential mishaps or errors that could occur if AI were adopted without regulation. These more targeted points include:
- Do not use AI to generate text or images for news reports and programme content without full disclosure and without following the reporting and consultation procedures
- Programme production and news reporting should ensure factual accuracy and avoid bias, drawing on multiple sources of information rather than relying solely on AI-generated material
- Do not broadcast news reports or programme content generated with the assistance of AI without review or confirmation
Such a policy guide is imperative so that all employees understand the boundaries within which AI can be used. The guidelines “serve as a benchmark for all personnel to employ AI techniques and tools, further facilitating the responsible and trustworthy development of AI in the field of communication.”
BBC – UK
On 5 October 2023, the BBC unveiled guidelines for how it will approach the use of Generative AI. Headed by Rhodri Talfan Davies, the BBC’s Director of Nations, the broadcaster will look to use the technology in a way that will “benefit all audiences and help us deliver our public mission” while also helping BBC teams to work more “effectively and efficiently”.
But Mr. Davies also outlined his concern over the “new and significant risks” posed by Gen AI “if not harnessed properly.” To ensure the right balance, the BBC has adopted three guiding principles:
- “Always act in the best interests of the public”
- “Always prioritise talent and creativity”
- “Be open and transparent”
The BBC announced it will start working on a number of projects using Gen AI, “in order to better understand both the opportunities and the risks”.
Exclusive offer to PMA members
Gain access to the latest Generative Artificial Intelligence Guidelines for your organisation using the PMA members-only discount.
Academic Research: Public Media & Gen AI
Addressing AI Intelligibility in Public Service Journalism
2023
This research explores the use of artificial intelligence at BBC News, and the levels of literacy and understanding staff have of it.
News Personalisation and Public Service Media: The Audience Perspective
2023
This research looks at audience expectations and concerns about news content curated using artificial intelligence.
The Governance of Artificial Intelligence in Public Service Media
2023
This report examines both the opportunities and questions which artificial intelligence raises for public service broadcasters.
PMA Reports: Public Media & Gen AI
16th April 2024
PSM Weekly | 10 – 16 April 2024
Our weekly round-up of public service media related stories and headlines from around the world.
3rd April 2024
VRT NWS is launching a fact-check marathon in the run-up to elections
With the approaching parliamentary and EU elections, the VRT NWS will fact-check political…
12th March 2024
PSM Weekly | 5 – 12 March 2024
Our weekly round-up of public service media related stories and headlines from around the world.
27th February 2024
PSM Weekly | 21 – 27 February 2024
Our weekly round-up of public service media related stories and headlines from around the world.
20th February 2024
PSM Weekly | 14 – 20 February 2024
Our weekly round-up of public service media related stories and headlines from around the world.
13th February 2024
PSM Weekly | 7 – 13 February 2024
Our weekly round-up of public service media related stories and headlines from around the world.
6th February 2024
PSM Weekly | 31 January – 6 February 2024
Our weekly round-up of public service media related stories and headlines from around the world.
30th January 2024
PSM Weekly | 24 – 30 January 2024
Our weekly round-up of public service media related stories and headlines from around the world.
23rd January 2024
PSM Weekly | 17 – 23 January 2024
Our weekly round-up of public service media related stories and headlines from around the world.
16th January 2024
PSM Weekly | 10 – 16 January 2024
Our weekly round-up of public service media related stories and headlines from around the world.
12th January 2024
Tensions and election prove a test for Taiwan media
Taiwan is in the spotlight ahead of an election amid geopolitical tension and an apparent…
8th December 2023
VRT: Embracing AI for the public good
The Head of the VRT Innovation Development explains their directives on usage of generative AI, and…
8th December 2023
PSM Unpacked | Moving to a digital future
Our global membership recently joined our PSM Unpacked roundtable to discuss the opportunities and…
11th October 2023
Generative AI at the BBC
Rhodri Talfan Davies, the BBC’s Director of Nations, sets out the latest on the approach the BBC is…
28th September 2023
Generative AI Guidelines for Media
Our members are eligible for an exclusive discount on the latest guidelines from South 180 on…
Featured image: A journalist working on a computer in a newsroom. Credit: Fedorovekb / Shutterstock.com
Secondary image: Abstract tech background with floating numbers. Credit: Dmitriy Rybin / Shutterstock.com