
Responsible AI in International Public Media

What?

This 18-month project seeks to improve knowledge exchange and collaboration on responsible AI among international public service media (PSM), and will result in extensive research and an industry report to help shape broader industry and regulatory discussions.

The project will make two key contributions: first, a systematic map of why and how AI is being integrated into international news production; and second, an exploration of how news staff understand “responsibility”, including how they navigate dilemmas between conflicting obligations to multiple publics and how AI can be deployed effectively in line with core PSM values.

Specifically, the project will conduct a technical audit of the different AI tools used by international PSM: their capabilities, the data they work with, and the roles they play. This will be supplemented by analysis of internal documentation and semi-structured interviews with senior executives. The project will also explore journalists’ values-in-action through contrasting case studies of AI-enabled international news production across different organisations, languages, and countries.

Why?

Trustworthy news is crucial to democracy and challenges the spread of mis- and disinformation. High-quality international news can also facilitate cross-cultural dialogue and educate people about the risks that they, and others, face in an interconnected world. At a time of growing authoritarianism, global pandemics, complex conflicts and climate change, it has never been more needed.

AI has the potential to help news organisations grow sustainably by reducing the notoriously high costs of producing multimedia, multiplatform, and often multilingual international coverage. But surprisingly, no one has yet researched how AI is used within international news production, let alone what it might mean to use it responsibly.

When?

This project will run for 18 months from May 2024.


Project partners


The project is being led by Dr Kate Wright, Senior Lecturer (Assoc. Prof) in Media and Communication at the University of Edinburgh. It is funded through a BRAID (Bridging Responsible AI Divides) Fellowship, with support from the UKRI Arts and Humanities Research Council and the University of Edinburgh.

Featured image: Belgium’s Flemish public broadcaster, VRT, has adopted new generative AI guidelines that dictate how it employs the technology. Credit: VRT