How should public service media adopt and integrate Generative AI into their workstreams? And how are public service broadcasters assessing and scrutinising the risks and fears raised about the technology?

This page is a live resource, with the latest policies, guidelines, strategies and approaches being adopted by public service media – both PMA members and non-members – globally, as well as the latest academic research into AI and public media.

As Generative AI becomes more prevalent and more integrated into media organisations, public service media carry a special responsibility to be transparent and accountable about how they manage its use. While multilateral efforts to regulate AI are underway, public broadcasters are already recognising its potential to improve their own processes and output. But PSM are also tempering that optimism with caution about the potential ramifications of AI: how it might affect their journalism, public trust in that journalism, and the information sphere more broadly.

As outlined below, many of the same concerns and hopes have been voiced by multiple broadcasters. Yet each organisation has its own priorities and its own approach. Swedish Radio, for example, has established an AI Council, while Yle has said it will only use Gen AI technology developed in Finland.

Read more: How public media is adopting AI

There is a shared feeling amongst broadcasters, however, that Gen AI can be a force for good if harnessed correctly, in accordance with the public service values that PSM uphold. Such strategy documents are therefore imperative to ensure public media workers employ Gen AI only where those values are bolstered, not undermined.


Listen to our podcast on PSM & AI!


Click on the tabs below to find out how each public broadcaster is reacting to Gen AI. 

ORF – Austria

In February 2025, the ORF published a set of guidelines on the use of AI across the organisation, committing to transparent use of the fast-evolving technology. The guidelines apply to both employees and external partners.

The guidelines focus on how the technology can be used to enhance public service quality standards and further improve the efficiency of ORF’s operations. They include rules to ensure that AI usage remains consistent with ORF’s values and editorial standards, such as equality, impartiality, and privacy protection. Human responsibility and agency are also central to the guidelines.

According to Harald Kräuter, Director of Technology and Digitalisation at ORF, the adoption of AI “means boldly moving forward and continuously developing ORF technologically and journalistically”. AI technologies would be used at ORF to support editorial teams by “creating freedom and more time for demanding journalistic activities such as in-depth research, fact-checking, and creative design processes.”

VRT – Belgium

On 9 November, VRT published its vision for integrating AI into the daily practice of its workers.

To that end, the public broadcasting organisation is developing a set of AI policies in line with its independence, trustworthiness and professional integrity.

As Tatjana Vandenplas, Head of Innovation Development, explains, VRT is looking at “potential applications that can solve problems and add value” to its services. The use of generative AI will focus on creating innovative content and making production more efficient, but also on automating supporting processes, such as writing summaries based on news articles by VRT journalists.

Key domains of focus for the VRT:  

  • (Re)creating content 
  • Strengthening accessibility 
  • Fact-checking 

The VRT also emphasises that AI can never replace a person, but that it is a valuable tool for assisting with different tasks.

CBC/Radio-Canada – Canada

The emergence of AI is a real turning point for the media industry. To grasp the opportunities and tackle the many challenges it brings, CBC/Radio-Canada has developed a set of principles to guide its use of AI in its operations.

Maintaining high journalistic standards and practices is vital for CBC/Radio-Canada, which designed its guidelines to ensure its public service mission can be carried out, and enriched, with the help of AI.

CBC/Radio-Canada’s approach revolves around the safe, transparent and ethical use of AI tools. It also focuses on improving the audience’s experience of its services, to make them as accessible as possible. One of the public broadcaster’s principles is also to collaborate with other media, education, research and technology stakeholders to “protect trusted news and combat disinformation”.

Read more [Policy - English]
France Médias Monde – France

France Médias Monde (FMM) has published a guide to good practice for the use of AI in the group’s editorial activities.

In this living document, which will evolve over time to keep pace with technological advancements in AI, the organisation outlines the key principles of AI tool usage, emphasising that these recommendations comply with FMM’s rules of ethics and editorial safety.

Like many other public broadcasters, FMM makes a point of ensuring that the use of AI is always supervised by a human and restricts the processing of sensitive or confidential information via open tools.

The use of AI in the newsrooms of France 24, RFI and Monte Carlo Doualiya must remain an aid to editorial production, used only where it maintains or improves the quality and originality of content, or reduces the time needed for document analysis.

First and foremost, FMM advocates for the transparent use of AI, both internally and towards the public, using markers to indicate when content has been generated with AI. The aim is to maintain the audience’s trust in FMM’s work. As such, AI cannot be used to generate realistic sound, photo or video content that could leave the audience in doubt about its authenticity.

Read more [Policy - French]
France Télévisions – France

With the rapid development and growing presence of artificial intelligence in society, France Télévisions has included guidelines in its code of conduct on the use of generative artificial intelligence.

The public broadcaster insists that no content should be written by an artificial intelligence; where exceptions exist, they must always operate under human journalistic supervision. In the interests of transparency towards the audience, any content produced with the help of Gen AI has to be labelled.

In addition to this, the organisation has created a service called ‘Les révélateurs de France TV’ to check content produced externally, before it goes on air. This service also has the task of detecting deepfakes.

Read more [Policy - French]
BR – Germany

The Bavarian broadcaster has published its AI ethics guidelines, which it says are largely designed to ensure any new technology serves its obligation to provide good journalism. “We want to help shape the constructive collaboration of human and machine intelligence and deploy it towards the goal of improving our journalism,” it reads. 

BR has published ten core guidelines that must be met when working with new technologies, but it says the work of its journalists is and will continue to be irreplaceable. 

These include ensuring the technology benefits the user, using it transparently, and keeping track of what data it is fed. The broadcaster must also be conscious of Bavaria’s diversity, for example ensuring speech-to-text tools handle regional dialects. 

It says editorial control must be maintained at all times, and any news personalisation must not feed into creating news bubbles for users.  

Read more [Policy - English]
SWR – Germany

AI presents opportunities and possibilities, such as expanding digital research, producing attractive and timely content, and making workflows more efficient. 

But there are also complexities and risks, with AI posing fundamental ethical questions for journalism, such as the risk of wrong decisions, legal breaches, or the publication of misinformation. 

SWR says the use of AI must provide a clear added value for its mission, and the organisation’s journalistic principles must apply without exception. Humans should always keep editorial control, make the final publishing decisions, and bear ultimate responsibility for them. All AI-generated content must be transparently labelled in order to maintain credibility. 

SWR also seeks to contribute to the development of AI in journalism and is looking to participate in suitable initiatives and partnerships. 

ZDF – Germany

ZDF announced its own set of principles for use of Generative AI on 26 October. Nine principles in total have been outlined which will direct how Gen AI might be integrated across the organisation. But the foremost principle is that while Gen AI can support editorial teams, it cannot replace them.

Selected other principles outlined by ZDF:

    1. Commitment to transparency over how, when and where Gen AI is used
    2. Not using Gen AI as a source
    3. Content created with support of Gen AI is subject to principles of journalistic due diligence
    4. Gen AI tools will not be enriched with sensitive data, although some exceptions are possible, if reviewed by the editorial team

“In order to be able to use the opportunities of AI in everyday work, we have to be aware of the risks. That’s why we need the guardrails,” said ZDF’s Editor-in-Chief, Bettina Schausten.

LRT – Lithuania

Lithuanian Radio and Television has published guidelines it says will determine the responsible and ethical use of AI, while still utilising it to help fulfil LRT’s mission to inform, educate and mobilise the public. 

The guidelines say editorial responsibility should be maintained at all times, and additional protective measures should be taken when AI is used to ensure LRT remains reliable. Content generated by AI should be clearly marked, the document says, and the security of AI tools should be ensured. 

LRT says the AI guidelines apply to all employees of the organisation, as well as to creative workers and external producers contributing content to LRT.

RNZ – New Zealand

In its vision of creating “Outstanding Public Media that Matters”, RNZ has embraced the opportunities offered by new technological tools, such as generative AI, to support the workflow of the editorial team and assist its storytelling.

In order to do so while upholding its mission to serve the public interest, RNZ has established a set of key principles which focus on transparency, oversight, and the ethical use of AI in the public broadcaster’s workflow.

Like many other broadcasters, RNZ emphasised that Gen AI remains a tool, stating that “it will generally not publish, broadcast or otherwise knowingly disseminate work created by generative AI”.

One of the fundamental principles of the public broadcaster’s AI guidelines is the necessity to maintain a relationship of trust with the public. “Above all, RNZ’s audiences need to be able to trust that our storytelling is robust, credible, and transparent.”

RTVE – Spain

On 29 November, RTVE announced its principles for responsible artificial intelligence. The proposal offers guidance for public and private media on how to implement the new technology in their daily practice.

The common thread linking the principles and rules laid down is that AI is considered, above all, a tool. In no case should it replace professionals; rather, it should aid them in their tasks and elevate the quality of the service the media offer. This aligns with the foundational principle of the public service media mission: to inform, entertain and educate, thereby upholding and reinforcing public service media’s democratic responsibility. 

In this regard, the principles established by RTVE state that AI technologies should be used in accordance with the tenets of democracy and “ensure, when AI is applied, the independence, purpose, universality, accessibility and responsibility of our activities and content” are maintained to “regain the credibility and trust of societies and audience.”  

RTVE also mentioned the importance of “implementing AI technologies at the forefront of the media industry in a responsible and human-centred manner,” to embrace innovation in a responsible way. 

Read more [Policy - Spanish]
Swedish Radio – Sweden

On 7 July, Swedish Radio published its policy for how it will use Generative AI, considering both the opportunities and risks presented by AI. The public broadcaster already uses AI in its recommender systems and news curation. It also uses AI tools to transcribe all audio to text.

But, as their News Commissioner, Olle Zachrison, outlined, Gen AI carries “significant risks – around journalism, law and security.” New and developing technologies capable of generating text, images, video, and audio “pose special challenges for serious media players … There are difficult balances to be made here in relation to our journalistic core values.” As such, Swedish Radio has established a company-wide AI council with three main tasks:

    1. Indicate which AI applications are of greatest strategic value
    2. Identify “journalistic, legal and security issues surrounding AI development and … propose guidelines where necessary”
    3. Initiate and participate in learning for all Swedish Radio staff

The broadcaster has published its policy document and has promised to continue engaging in dialogue with other media organisations.

SRG SSR – Switzerland

In an effort to adapt to the rapidly evolving field of Artificial Intelligence and Gen AI, and the changes it brings to the media sector, SRG SSR has developed a series of principles to guide its use of these tools.

Addressing both the potential and the risks that AI brings to the broadcaster, these principles establish a framework for using AI to enhance the quality of SRG’s journalism.

At the core of these guidelines, however, SRG SSR affirms human responsibility over AI, as well as the organisation’s transparency about the use of the technology and respect for confidentiality when using it.

PTS – Taiwan

In its AI usage guidelines sent to PTS staff, the public broadcaster outlined five basic principles which would dictate how it employs AI:

    1. Respect for human autonomy
    2. Avoid harm
    3. Fairness and common good
    4. Transparency and accountability
    5. Upholding public value

On top of these principles, the broadcaster also compiled a more specific usage guide, looking to pre-empt any potential mishaps or errors which could occur if AI was adopted without regulations. These more targeted points include:

    • AI should not be used to generate text or images for news reports or program content without full disclosure and without following the reporting and consultation procedures
    • Program production and news reporting should ensure factual accuracy and avoid bias. Multiple sources of information should be used, rather than relying solely on AI-generated material.
    • News reports and program content generated with the assistance of AI should not be broadcast without review and confirmation.

Such a policy guide is imperative so that all employees understand the boundaries within which AI can be used. The guidelines “serve as a benchmark for all personnel to employ AI techniques and tools, further facilitating the responsible and trustworthy development of AI in the field of communication.”

Read more
BBC – UK

On 5 October 2023, the BBC unveiled guidelines for how it will approach the use of Generative AI. In an effort headed by Rhodri Talfan Davies, the BBC’s Director of Nations, the broadcaster will look to use the technology in a way that will “benefit all audiences and help us deliver our public mission” while also helping BBC teams to work more “effectively and efficiently”.

But Mr. Davies also outlined his concern over the “new and significant risks” posed by Gen AI “if not harnessed properly.” To ensure the right balance, the BBC has adopted three guiding principles:

    1. “Always act in the best interests of the public”
    2. “Always prioritise talent and creativity”
    3. “Be open and transparent”

The BBC announced it will start working on a number of projects using Gen AI, “in order to better understand both the opportunities and the risks”.

Read more

Exclusive offer to PMA members

Gain access to the latest Generative Artificial Intelligence Guidelines for your organisation using the PMA members-only discount.

Find out more



PMA Reports: Public Media & Gen AI

Regional PMA workshop on Responsible AI and Public Media

PMA is proud to launch a new workshop on Responsible AI for broadcasters across Southern Africa in…

Read More

New partnership to research responsible AI in international public media

PMA is delighted to announce a new partnership to research the responsible use of AI by…

Read More

VRT NWS is launching a fact-check marathon in the run-up to elections

With the approaching parliamentary and EU elections, the VRT NWS will fact-check political…

Read More

Tensions and election prove a test for Taiwan media

Taiwan is in the spotlight ahead of an election amid geopolitical tension and an apparent…

Read More

VRT: Embracing AI for the public good

The Head of the VRT Innovation Development explains their directives on usage of generative AI, and…

Read More

PSM Unpacked | Moving to a digital future

Our global membership recently joined our PSM Unpacked roundtable to discuss the opportunities and…

Read More

Generative AI at the BBC

Rhodri Talfan Davies, the BBC’s Director of Nations, sets out the latest on the approach the BBC is…

Read More

Generative AI Guidelines for Media

Our members are eligible for an exclusive discount on the latest guidelines from South 180 on…

Read More

Swedish Radio publishes policy for generative AI

Swedish Radio has been actively exploring how AI can strengthen our offer to listeners and make our…

Read More

How public media is adopting AI

PSM has continued to pioneer adapting Artificial Intelligence (AI) into their workflows, whilst…

Read More

“More than a tool”: RTVE uses AI tech to cover local elections

RTVE is revolutionising its approach to covering local elections using artificial intelligence…

Read More

KBS unveils AI-powered VVERTIGO

KBS has introduced VVERTIGO, a unique AI-based system that produces reframed videos by…

Read More

KBS uses “cutting edge technology” in 2022 election coverage

South Korea’s largest and most trusted public broadcaster used technological innovation to push its…

Read More

Radiodays Europe 2021: Public media’s latest innovations

How are public media redefining their audio content to stay relevant and reach audiences across…

Read More


Featured image: A journalist working on a computer in a newsroom. Credit: Fedorovekb / Shutterstock.com 

Secondary image: Abstract tech background. Floating Numbers HUD Background. Matrix particles grid virtual reality. Smart build. Grid core. Hardware quantum form. Credit: Dmitriy Rybin / Shutterstock.com