PRESS RELEASE

Data privacy and national security the top concerns for PSM in AI procurement

11 December 2025
A new industry report from the Public Media Alliance explores how public service media organisations procure and use commercially available AI tools to aid their journalism.
PSM and AI Part 2: Governance, Geopolitics, and Procurement. Credit: Adobe Firefly/PMA

Ensuring data privacy and minimising risks to national and international security are considered the most important responsibilities for public service media (PSM) when procuring commercial AI products for use in journalism, a new report finds.

The industry report, titled PSM and AI Part 2: Governance, Geopolitics, and Procurement, was authored by Professor Kate Wright, Chair in Media and Communications at the University of Edinburgh, and co-authored by Kristian Porter, CEO of the Public Media Alliance. It is the second industry report from the Responsible AI in International Public Media project, funded by BRAID (Bridging Responsible AI Divides) Fellowships, with support from the UKRI Arts and Humanities Research Council (AHRC) and the University of Edinburgh.

The first report mapped which AI tools public media organisations use in journalism production, and how they use them. The second examines the challenges PSM face in trying to procure AI ‘responsibly’, and how the procurement and use of AI are embedded in governance structures.

Thirteen public media organisations of varying income levels, spanning five continents, were included in the interviews and data collection, which took place throughout 2024 and 2025.

“The research finds that PSM have distinctive concerns, challenges and values, which shape their efforts to procure AI ‘responsibly’ at a time of heightened security risks and rapid geopolitical change.” – Professor Kate Wright, Lead Author, University of Edinburgh

Key findings:
  • Over half of the AI tools cited by PSM are developed by companies based in the US. This could be problematic for public media, given the Trump administration’s removal of risk-based AI regulation and other actions that could make US-based AI companies vulnerable to political influence.

  • PSM have specific concerns about what it means to procure AI ‘responsibly’, especially given their heightened risk of cyberattacks. Key threats include criminal gangs, terrorist groups and hostile states, especially Russia and China.

  • PSM that have legislated roles as ‘critical infrastructure’ during emergencies and crises have particularly strong, distinctive concerns about security. However, PSM raised concerns that some governments could use such designations to undermine their independence.

  • Despite their shared concerns, many PSM are reluctant to talk openly with one another about their experiences with specific AI tools, in case doing so exposes their organisations to further privacy and security risks. Informal conversations between PSM with established relationships tend to exclude low-income organisations directly threatened by authoritarian states.

  • When procuring AI tools, PSM feel a strong responsibility to audiences to ensure the proper management of data. But uncertainty remains about whether PSM should use AI tools accused of breaching creators’ copyright.

  • To minimise data privacy and security risks, high-income PSM prefer to develop in-house tools and/or pay premium rates for AI products from large technology companies. However, middle- and low-income PSM are more likely to invest in small and medium-sized AI companies, both to support local start-ups and to stay within budget.

  • Investigating and piloting new AI tools is time-consuming and costly, particularly for low-income PSM. There is interest in creating an actively maintained database of AI tools, including the considerations most relevant to PSM.

Recommendations:

On the basis of these findings, PSM are urged to:

  1. Consider prioritising AI providers based in full democracies, wherever possible.
  2. Regularly audit AI providers’ privacy policies and the locations of data storage and processing, as well as vulnerabilities in the underlying foundation models.
  3. Be aware that their respective governments could use security concerns to try to compromise their operational independence.
PSM & AI
Part 2: Governance, Geopolitics, and Procurement

November 2025

Lead Author: Professor Kate Wright, Chair in Media and Communications at the University of Edinburgh.
Co-Author: Kristian Porter, CEO of the Public Media Alliance.

This work was funded by BRAID, a UK-wide programme dedicated to integrating arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI. BRAID is funded by the Arts and Humanities Research Council (AHRC).


About the authors

Professor Kate Wright holds a Personal Chair in Media and Communications at the University of Edinburgh. A former BBC producer, she has researched the practices and political economies of journalism for the past twenty years.


Kristian Porter is the CEO of the Public Media Alliance, a global association of public service media organisations.



For more information, please contact info@publicmediaalliance.org