New partnership to research responsible AI in international public media

16th May 2024
The Public Media Alliance is delighted to announce a new partnership to research the responsible use of AI by international public service media. 

A new research project has been launched in partnership with the Public Media Alliance to investigate the opportunities for responsible AI within international public service media (PSM). The goal is to explore how AI can help build trustworthy news delivery. 

Funded by the BRAID Fellowship and led by Dr Kate Wright (University of Edinburgh), the project will use PMA’s extensive network to research best practices in the deployment of AI in an ethical, value-driven way.  

Read more: VRT: Embracing AI for the public good (Insight)

This 18-month project will seek to improve knowledge exchange and collaboration between international PSM on responsible AI, and result in extensive research and an industry report that will help to shape broader industry and regulatory discussions.  


Trustworthy news is crucial to democracy and challenges the spread of mis- and disinformation. High-quality international news can also facilitate cross-cultural dialogue and educate people about the risks that they—and others—face in an interconnected world. At a time of growing authoritarianism, global pandemics, complex conflicts and climate change, it has never been so needed. 

AI has the potential to help news organisations grow in a sustainable way by reducing the notoriously high costs of producing multimedia, multiplatform, and often multilingual, international coverage. But surprisingly, no one has yet researched how AI is used within international news production—let alone what it might mean to use it responsibly. 


The project will make two key contributions: first, a systematic map of why and how AI is being integrated into international news production; and second, an exploration of how news staff understand “responsibility”, including how they navigate dilemmas between conflicting obligations to multiple publics and how AI can be deployed in line with core PSM values. 

Specifically, the project will conduct a technical audit of the different AI tools used by international PSM, their capabilities, the data they work with, and the roles they play. This will be supplemented via analysis of internal documentation and semi-structured interviews with senior executives. The project will also be an opportunity to explore journalists’ values-in-action via contrasting case studies of AI-enabled international news production within different organisations, languages, and countries.  

International PSM provide an ideal testbed for research into the responsible use of AI within international news because these major networks are continually adopting and developing AI tools to support their demanding work. They are best known for providing independent news to audiences abroad, including elite policymakers, marginalised and displaced groups, and those with little or no free media. But they also disseminate content via national public broadcasters, online and/or social media.   

Public Service Media and Generative AI

How have public service media responded to the emergence and accessibility of generative AI? Our live resource explores the different strategies PSM have employed to control the use of AI, and also includes the latest research and headlines related to AI.

The Belgian Flemish public broadcaster, VRT, has adopted new generative AI guidelines to dictate how it employs the technology. Credit: VRT
What are the BRAID Fellowships?

The fellowships support individual researchers to work in partnership with public, private and third sector organisations to address current challenges in the field of responsible AI. 

By working on common challenges across sectors and with a range of stakeholders, the programme aims to support leadership in the field of responsible AI, create lasting connections, and directly impact how AI is considered, developed, and deployed in research, in practice, and in society at large. 

This year’s fellowships run from May and last between 12 and 18 months.  

PMA thanks BRAID, the UKRI Arts and Humanities Research Council, and the University of Edinburgh for supporting this initiative. 

Want to get involved?

The more participating organisations there are, the better.  

Email to register your interest in participating in this project.