How should public service media adopt and integrate Generative AI into their workflows? And how are public service broadcasters assessing and scrutinising the risks and fears raised about Generative AI?

This page is a live resource, with the latest policies, guidelines, strategies and approaches being adopted by public service media – both PMA members and non-members – globally, as well as the latest academic research into AI and public media.

As Generative AI becomes more prevalent and more deeply integrated into media organisations, public service media carry a special responsibility to be transparent and accountable about how they manage its use. While multilateral efforts are underway to regulate AI, public broadcasters are already recognising its potential to improve their own processes and output. But PSM are also tempering such optimism with caution about the potential ramifications of AI: how it might impact their own journalism and public trust in that journalism, as well as the information sphere more broadly.

As outlined below, many of the same concerns and hopes have been expressed by multiple broadcasters. Yet each organisation may set different priorities or take a different approach. Swedish Radio has established an AI Council, for example, while Yle has said it will only use Gen AI technology developed in Finland.

Read more: How public media is adopting AI

There is a shared feeling among all broadcasters, however, that Gen AI can be a force for good if harnessed correctly, in accordance with the public service values that PSM embody. Such strategy documents are therefore imperative to ensure public media workers employ Gen AI only where those values can be bolstered, not undermined.


Listen to our podcast on PSM & AI!


Click on the tabs below to find out how each public broadcaster is reacting to Gen AI. 

VRT – Belgium

On 9 November, VRT published its vision for implementing AI in the daily practice of its workers.

To that end, the public broadcaster is developing a set of AI policies in line with its independence, trustworthiness and professional integrity.

As Tatjana Vandenplas, head of innovation development, explained, VRT is looking at “potential applications that can solve problems and add value” to its services. The use of generative AI will focus on creating innovative content and making production more efficient, but also on automating supporting processes, such as writing summaries based on news articles written by VRT journalists.

Key domains of focus for the VRT:  

  • (Re)creating content 
  • Strengthening accessibility 
  • Factchecking 

VRT also emphasises that AI can never replace a person, but that it is a valuable tool for assisting with different tasks.

CBC/Radio-Canada – Canada

The emergence of AI is a real turning point for the media industry. To grasp the opportunities and tackle the many challenges it brings, CBC/Radio-Canada developed a set of principles to guide its use of AI in its operations.

Maintaining high journalistic standards and practices is vital for CBC/Radio-Canada, which designed its guidelines to ensure its public service mission can be carried out and enriched with the help of AI.

CBC/Radio-Canada’s approach revolves around the safe, transparent and ethical use of AI tools. It also focuses on improving audiences’ experience of its services, making them as accessible as possible. One of the public broadcaster’s principles is also to collaborate with other media, education, research and technology stakeholders to “protect trusted news and combat disinformation”.

Read more [Policy - English]
France Télévisions – France

With the rapid development and growing presence of artificial intelligence in society, France Télévisions has included guidelines in its code of conduct on the use of generative artificial intelligence.

The public broadcaster insists that no content should be written by artificial intelligence; where exceptions are made, they must always be under human journalistic supervision. In the interest of transparency towards the audience, any content produced with the help of Gen AI must be labelled.

In addition to this, the organisation has created a service called ‘Les révélateurs de France TV’ to check content produced externally, before it goes on air. This service also has the task of detecting deepfakes.

Read more [Policy - French]
BR – Germany

The Bavarian broadcaster has published its AI ethics guidelines, which it says are largely designed to ensure any new technology serves its obligation to provide good journalism. “We want to help shape the constructive collaboration of human and machine intelligence and deploy it towards the goal of improving our journalism,” the document reads.

BR has published ten core guidelines that must be met when working with new technologies, but it says the work of its journalists is and will continue to be irreplaceable. 

These include benefiting the user, using the technology transparently, and keeping track of what data it is fed. BR must also be conscious of the diversity of Bavaria, for example ensuring that speech-to-text tools incorporate regional dialects.

It says editorial control must be maintained at all times, and any news personalisation must not feed into creating news bubbles for users.  

Read more [Policy - English]
SWR – Germany

AI presents opportunities and possibilities, such as for the expansion of digital research, to produce attractive and timely content and to make workflows more efficient. 

But there are complexities and risks, with AI posing fundamental ethical questions for journalism, such as the risk of wrong decisions, legal breaches, or publishing misinformation. 

SWR says the use of AI must provide clear added value for its mission, and the organisation’s journalistic principles must apply without exception. Humans should always keep editorial control, make the final publishing decisions, and bear ultimate responsibility for them. All AI-generated content must be transparently labelled in order to maintain credibility.

SWR also seeks to contribute to the development of AI in journalism and is looking to participate in suitable initiatives and partnerships. 

ZDF – Germany

ZDF announced its own set of principles for use of Generative AI on 26 October. Nine principles in total have been outlined which will direct how Gen AI might be integrated across the organisation. But the foremost principle is that while Gen AI can support editorial teams, it cannot replace them.

Selected other principles outlined by ZDF:

    1. Commitment to transparency over how, when and where Gen AI is used
    2. Not using Gen AI as a source
    3. Content created with support of Gen AI is subject to principles of journalistic due diligence
    4. Gen AI tools will not be enriched with sensitive data, although some exceptions are possible, if reviewed by the editorial team

“In order to be able to use the opportunities of AI in everyday work, we have to be aware of the risks. That’s why we need the guardrails,” said ZDF’s Editor-in-Chief, Bettina Schausten.

LRT – Lithuania

Lithuanian Radio and Television has published guidelines it says will determine the responsible and ethical use of AI, while still utilising it to help fulfil LRT’s mission to inform, educate and mobilise the public. 

The guidelines say editorial responsibility should be maintained at all times, and additional protective measures should be taken when AI is used to ensure LRT remains reliable. Content generated by AI should be clearly marked, the document says, and the security of AI tools should be ensured. 

LRT says the AI guidelines apply to all employees of the organisation, as well as to creative workers and external producers contributing content to LRT.

RNZ – New Zealand

In pursuit of its vision of creating “Outstanding Public Media that Matters”, RNZ has embraced the opportunities offered by new technological tools such as generative AI to support the editorial team’s workflow and assist their storytelling.

In order to do so while upholding its mission to serve the public interest, RNZ has established a set of key principles which focus on transparency, oversight, and the ethical use of AI in the public broadcaster’s workflow.

Like many other broadcasters, RNZ emphasised that Gen AI remains a tool, stating that “it will generally not publish, broadcast or otherwise knowingly disseminate work created by generative AI”.

One of the fundamental principles of the public broadcaster’s AI guidelines is the necessity to maintain a relationship of trust with the public. “Above all, RNZ’s audiences need to be able to trust that our storytelling is robust, credible, and transparent.”

RTVE – Spain

On 29 November, RTVE announced its principles of responsible artificial intelligence. The proposal gives a set of guidance for public and private media on how to implement this new technology in their daily practice.  

The common thread linking the principles and rules is that AI is considered above all a tool. In no case should it replace professionals; rather, it should be used as an aid to improve their work and elevate the quality of the service offered by the media. This aligns with the foundational principle of the public service media mission – to inform, entertain and educate – thereby upholding and reinforcing public service media’s democratic responsibility.

In this regard, the principles established by RTVE state that AI technologies should be used in accordance with the tenets of democracy and “ensure, when AI is applied, the independence, purpose, universality, accessibility and responsibility of our activities and content” are maintained to “regain the credibility and trust of societies and audience.”  

RTVE also mentioned the importance of “implementing AI technologies at the forefront of the media industry in a responsible and human-centred manner,” to embrace innovation in a responsible way. 

Read more [Policy - Spanish]
Swedish Radio – Sweden

On 7 July, Swedish Radio published its policy for how it will use Generative AI, considering both the opportunities and risks presented by AI. The public broadcaster already uses AI in its recommender systems and news curation. It also uses AI tools to transcribe all audio to text.

But, as its News Commissioner, Olle Zachrison, outlined, Gen AI carries “significant risks – around journalism, law and security.” New and developing technologies capable of generating text, images, video, and audio “pose special challenges for serious media players … There are difficult balances to be made here in relation to our journalistic core values.” As such, Swedish Radio has established a company-wide AI council with three main tasks:

    1. Indicate which AI applications are of greatest strategic value
    2. Identify “journalistic, legal and security issues surrounding AI development and … propose guidelines where necessary”
    3. Initiate and participate in learning for all Swedish Radio staff

The broadcaster has published its policy document and has also promised to continue engaging in dialogue with other media entities.

SRG SSR – Switzerland

In an effort to adapt to the rapidly evolving fields of Artificial Intelligence and Gen AI, and the changes they bring to the media sector, SRG SSR has developed a series of principles to guide its use of these tools.

Addressing both the potential and the risks that AI brings to the broadcaster, these principles establish a framework for using AI to enhance the quality of SRG’s journalism.

At the core of these guidelines, however, SRG SSR affirms the responsibility of humans over AI, as well as the organisation’s transparency about its use of the technology and respect for confidentiality when using it.

PTS – Taiwan

In its AI usage guidelines sent to PTS staff, the public broadcaster outlined five basic principles which would dictate how it employs AI:

    1. Respect for human autonomy
    2. Avoid harm
    3. Fairness and common good
    4. Transparency and accountability
    5. Upholding public value

On top of these principles, the broadcaster also compiled a more specific usage guide, looking to pre-empt any potential mishaps or errors which could occur if AI was adopted without regulations. These more targeted points include:

    • Do not use AI to generate text or images for news reports and program content without full disclosure and without following the reporting and consultation procedures
    • Program production and news reporting should ensure factual accuracy and avoid bias; multiple sources of information should be used, rather than relying solely on AI-generated material
    • Do not broadcast news reports or program content generated with the assistance of AI without review or confirmation

Such a policy guide is imperative so that all employees understand the boundaries within which AI can be used. The guidelines “serve as a benchmark for all personnel to employ AI techniques and tools, further facilitating the responsible and trustworthy development of AI in the field of communication.”

Read more
BBC – UK

On 5 October 2023, the BBC unveiled guidelines for how it will approach the use of Generative AI. The initiative, headed by Rhodri Talfan Davies, the BBC’s Director of Nations, will see the broadcaster use the technology in a way that will “benefit all audiences and help us deliver our public mission” while also helping BBC teams to work more “effectively and efficiently”.

But Mr. Davies also outlined his concern over the “new and significant risks” posed by Gen AI “if not harnessed properly.” To ensure the right balance, the BBC has adopted three guiding principles:

    1. “Always act in the best interests of the public”
    2. “Always prioritise talent and creativity”
    3. “Be open and transparent”

The BBC announced it will start working on a number of projects using Gen AI, “in order to better understand both the opportunities and the risks”.

Read more

Exclusive offer to PMA members

Gain access to the latest Generative Artificial Intelligence Guidelines for your organisation using the PMA members-only discount.

Find out more

Academic Research: Public Media & Gen AI

Addressing AI Intelligibility in Public Service Journalism

2023
This research explores the use of artificial intelligence at BBC News, and the levels of literacy and understanding staff have of it.

Read More

News Personalisation and Public Service Media: The Audience Perspective

2023
This research looks at audience expectations and concerns about news content curated using artificial intelligence.

Read More

The Governance of Artificial Intelligence in Public Service Media

2023
This report examines both the opportunities and questions which artificial intelligence raises for public service broadcasters.  

Read More


PMA Reports: Public Media & Gen AI

World Press Freedom Day | The PMA Briefing

On World Press Freedom Day - attacks on press freedom, journalism faces pressure from AI and…

Read More

WPFD 2025: Our renewed commitment to defend public media

In this WPFD message, PMA's CEO warns of escalating media freedom threats and outlines our renewed…

Read More

Five ways AI can save and endanger Public Service Media (at the same time)

Develop AI founder Paul McNally shares with us five ways how AI can save and endanger public…

Read More

AI and Journalism: A new headline for news

Karel Degraeve, Innovation Expert at VRT NWS, shares his views on the challenges and opportunities…

Read More

Curious about a news story? – Swedish Radio tests AI supported news search

SR is the first media in Sweden to invite the public to test a new type of news search combining…

Read More

BBC research shows issues with Artificial Intelligence (AI) assistants

Conducted over a month, the study saw the BBC test four prominent, publicly available AI…

Read More

Lessons from ABC’s podcast episode title optimisation pilot

The ABC looked if AI could help to create podcast titles and descriptions that are the perfect…

Read More

Artificial Intelligence: A protective tool for journalists?

Some broadcasters are using AI for good, and thinking of innovative ways it can be employed to…

Read More

How can AI bolster the mission of PSM?

The integration of AI by PSM needs safeguards through policies and proper governance, PMA's CEO…

Read More

ABC Assist: Designing an AI application for responsibility in the UX

ABC's Digital Product team developed ABC Assist, a powerful AI tool that improves research process…

Read More

PMA runs workshop on Responsible AI & Public Media

Our latest workshop focussed on how to best deploy AI tools in a responsible way, maintaining and…

Read More

RTP’s use of a rich media ecosystem to reach younger audience

João Galveias of RTP explores the different ways and challenges for public broadcasters to reach a…

Read More

What do people think of generative AI?

AI raises all manner of complex issues, particularly with its use in news. But it's also something…

Read More

Regional PMA workshop on Responsible AI and Public Media

PMA is proud to launch a new workshop on Responsible AI for broadcasters across Southern Africa in…

Read More

New partnership to research responsible AI in international public media

PMA is delighted to announce a new partnership to research the responsible use of AI by…

Read More


Featured image: A journalist working on a computer in Newsroom. Credit: Fedorovekb / Shutterstock.com 

Secondary image: Abstract tech background. Floating Numbers HUD Background. Matrix particles grid virtual reality. Smart build. Grid core. Hardware quantum form. Credit: Dmitriy Rybin / Shutterstock.com