WPFD 2025 | INSIGHT

Five ways AI can save and endanger Public Service Media (at the same time)

3 May 2025
World Press Freedom Day 2025 focuses on how AI affects press freedom. AI can greatly assist journalists and media organisations, but taken too far, it presents a substantial threat, writes Paul McNally, founder of the AI consultancy and training company Develop AI.
Abstract computer circuit board. Credit: Adobe Express

By Paul McNally, journalist and founder of Develop AI

Last year I was training journalists in Nairobi, Kenya, on how to use AI ethically and effectively. On the second day of the workshop, when honesty was finally free-flowing, one editor revealed that they were in a bind. There was no way they could disclose to their audience that they were using AI, which meant they could not implement any of the tools or strategies we were talking about. They were convinced their audience would consider them too “lazy” if it discovered AI was being used, and that, regardless of the accuracy or their enjoyment of the content, readers would definitely turn away from the outlet if AI was involved. 

This has struck me as a vital note amid the AI insanity of the last few years. Media organisations have been largely brainwashed by large donors and impact investors into prioritising “membership models” and “audience engagement” as the road to revenue. And though there are success stories of membership models making money, by design the approach can only work for the top few. The rest are left berating themselves and agonising over their newsletter open rates. So the idea of AI shattering this close human connection between the writer and the reader is understandably distressing. But the truth is, AI is reshaping journalism, and the audience doesn’t know what it will be upset or enthralled by until it sees it. 

In many ways, this transformation feels both thrilling and terrifying. On one hand, AI offers the promise of streamlining production, uncovering hidden patterns in data, and translating or transcribing stories faster than any human. But on the other hand, it poses some deeply uncomfortable questions, especially for those who care about press freedom and the survival of public interest media.

Let’s start with the obvious: tools powered by AI now assist journalists in summarising documents, generating headline suggestions, and even writing entire news articles. For a stretched newsroom, especially in a time of dwindling revenue and shrinking editorial teams, this feels like a gift. But it’s not a neutral technology. AI has the potential to spread disinformation as easily as it can share facts. A language model doesn’t care whether it’s helping you investigate corruption or making up a plausible-sounding conspiracy theory. It just does what it’s trained to do: predict the next word. This recent podcast episode highlights very well why we shouldn’t overuse the technology.
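
To make that "predict the next word" point concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the small public gpt2 model (both my own choices for illustration, not tools named in this piece). The model completes a plausible investigative lead and a plausible-sounding conspiracy prompt with exactly the same fluency, because it has no concept of which is true.

```python
# Minimal sketch: a language model simply predicts the next words,
# whether the prompt is a real lead or a fabricated conspiracy.
# Assumes the open-source transformers package and the public gpt2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The auditor's report found that the missing funds were",   # plausible investigation lead
    "Leaked documents prove the moon landing was staged by",    # plausible-sounding conspiracy
]

for prompt in prompts:
    completion = generator(prompt, max_new_tokens=20, num_return_sequences=1)
    print(completion[0]["generated_text"])
```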

Here are five ways AI could support public service journalism, and how each could also cause harm if we get lazy or greedy, or forget what the point of journalism is in the first place:

1 – Automating the boring stuff

The positive: AI can transcribe interviews, summarise press releases, translate content into local languages, and even generate rough drafts. For under-resourced newsrooms, this is a godsend. It frees up journalists to focus on original reporting, investigations, and community engagement.

Tool example: Notta – Provides AI-based voice-to-text transcription supporting over 100 languages, facilitating multilingual content creation. 
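
Notta is a commercial product, so purely to illustrate the underlying idea, here is a minimal transcription sketch using the open-source openai-whisper package (my choice of tool, with a hypothetical file name, not anything Notta provides):

```python
# Minimal transcription sketch using the open-source openai-whisper package.
# "interview.mp3" is a hypothetical file; the output is a rough draft
# that still needs a human editor to check names, numbers and nuance.
import whisper

model = whisper.load_model("base")          # small multilingual model
result = model.transcribe("interview.mp3")  # language is detected automatically
print(result["text"])

# The same call can produce a rough English translation of a local-language interview:
# result = model.transcribe("interview.mp3", task="translate")
```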

The negative: If media organisations start to rely on AI to create journalism instead of supporting it, we end up with filler content: articles with no soul, no context, and no public value. The efficiency becomes the product. Public interest takes a back seat to “content velocity.”

2 – Reaching new audiences

The positive: AI-driven personalisation can help newsrooms tailor stories to different languages, regions, or even individual readers. This could be transformative in multilingual societies where access to news in your own language isn’t guaranteed. It’s also an opportunity to reach younger audiences on the platforms they actually use.

Tool example: Adobe Target – Facilitates personalised content delivery based on user behaviour, enhancing reader engagement. 
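
To show the mechanics rather than the product, here is a minimal personalisation sketch, assuming scikit-learn and a handful of invented headlines (this is not how Adobe Target works internally, just the general idea of matching readers to similar content):

```python
# Minimal personalisation sketch: recommend articles similar to what the
# reader has already clicked. Headlines and reading history are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "City council approves new housing budget",
    "Local football club wins regional final",
    "Investigation: where did the housing budget go?",
    "Weekend weather: sunshine expected across the region",
]
reader_history = ["Housing budget debate continues in city council"]

vectoriser = TfidfVectorizer()
doc_matrix = vectoriser.fit_transform(articles + reader_history)

# Similarity between the reader's history and each candidate article.
scores = cosine_similarity(doc_matrix[len(articles):], doc_matrix[:len(articles)]).flatten()
for title, score in sorted(zip(articles, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
```

The same similarity score that keeps this reader engaged is also the mechanism that keeps feeding them housing stories and little else, which is exactly the filter-bubble problem described below.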

The negative: Hyper-personalisation can also create filter bubbles. If every user gets a version of the news tailored to their preferences, we lose shared narratives. Worse, editorial priorities might shift away from what’s important to what’s clickable for each audience segment. Suddenly, public service media starts to behave like an ad-tech company.

3 – Investigating at scale

The positive: AI can analyse massive datasets – leaked documents, financial records, social media networks – to spot corruption, disinformation, or environmental abuse. These are tools that empower small teams to punch way above their weight.

Tool example: Bellingcat’s Online Investigations Toolkit – Provides a comprehensive set of tools for open-source investigations, including geolocation and social media analysis. 
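
As a sense of what “investigating at scale” can mean in practice, here is a minimal sketch using pandas on a hypothetical payments file (the file, its columns and the threshold are all invented for illustration; this is not part of Bellingcat’s toolkit):

```python
# Minimal pattern-spotting sketch over hypothetical financial records.
# Flags suppliers whose total payments sit far above the norm (simple z-score).
import pandas as pd

payments = pd.read_csv("payments.csv")  # hypothetical columns: supplier, amount

totals = payments.groupby("supplier")["amount"].sum()
z_scores = (totals - totals.mean()) / totals.std()
outliers = z_scores[z_scores > 3].sort_values(ascending=False)

print("Suppliers worth a closer human look:")
print(outliers)
```

Anything this flags is a starting point for human reporting, not a finding in itself.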

The negative: Investigative work that leans too heavily on AI can become detached from human context. Algorithms might miss nuance. Worse, they could flag the wrong patterns, sending journalists down rabbit holes that waste time or mislead. And if the source of your insights is a black-box model with unclear logic, how do you defend your reporting?

4 – Fighting misinformation and disinformation

The positive: AI tools can detect fake news, identify coordinated disinformation campaigns, and trace the spread of viral lies. That’s huge. It arms journalists and the public alike with defences against bad actors trying to pollute the information ecosystem.

Tool example: Meedan’s Check – This is a fantastic service where you can build a tip line for your community and analyse the results at scale.  
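
One crude but illustrative signal of coordination is many different accounts posting near-identical text. Here is a minimal sketch with invented posts and an arbitrary threshold (real services such as Check are far more sophisticated than this):

```python
# Minimal sketch: flag identical text posted by several different accounts,
# one crude signal of a copy-paste disinformation campaign. Data is invented.
from collections import defaultdict

posts = [
    {"account": "user_01", "text": "Vaccine X caused the outbreak, share before this is deleted!"},
    {"account": "user_02", "text": "Vaccine X caused the outbreak, share before this is deleted!"},
    {"account": "user_03", "text": "Lovely sunset over the harbour tonight."},
    {"account": "user_04", "text": "Vaccine X caused the outbreak, share before this is deleted!"},
]

accounts_by_text = defaultdict(set)
for post in posts:
    accounts_by_text[post["text"].lower()].add(post["account"])

for text, accounts in accounts_by_text.items():
    if len(accounts) >= 3:  # arbitrary threshold for illustration
        print(f"Possible coordination ({len(accounts)} accounts): {text}")
```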

The negative: Automated content moderation often fails to understand irony, dissent, or cultural context. Legitimate voices – especially from marginalised communities – can be suppressed by blunt AI filters trained on the biases of the dominant web. The fight against misinformation becomes a war on complexity.

5 – Enhancing legal and ethical oversight

The positive: AI is increasingly being utilised to assist journalists and editors in identifying potential legal and ethical issues within their content. This includes detecting defamatory statements, ensuring compliance with privacy laws, and maintaining journalistic standards.

Tool example: Harvey.ai – It isn’t quite built for newsroom oversight, but it is a legal AI assistant. I assume the name is a reference to Suits.

The negative: Depending solely on AI tools for legal review is risky, as these tools may not fully grasp context or nuanced legal standards, potentially allowing critical issues to slip through. And without proper human oversight, AI-generated content may inadvertently include biased or unethical material, undermining the credibility of public service media.



The big threat

One of the biggest threats to freedom of the press today isn’t an oppressive government or a corrupt billionaire. It’s an algorithm that unintentionally drowns out truth with noise. AI-generated content can blur the line between credible journalism and fake news, especially when it mimics legitimate reporting with uncanny accuracy. If the public can’t trust what they see, that lack of trust spills over into legitimate media. It becomes harder for real journalism to stand out. We’re already seeing AI-created deepfakes used to discredit reporters and attack their credibility online. In the hands of a bad actor, these tools become weapons.

It gets worse when you consider surveillance. AI-powered facial recognition and metadata analysis can be used to track journalists, monitor their communications, and expose their sources. In some parts of the world, this kind of technology has been used to intimidate and silence critical voices. The technology doesn’t care if it’s being used for good or evil. But we, as humans, need to care. We need to set boundaries.

Ideally, we need to acknowledge where AI is involved in the reporting process and be honest with audiences about what that means. But we do need to be careful about alienating audiences and appearing “lazy”. This is usually solved by talking to an audience thoroughly and meeting them halfway – they may be okay with certain tasks being AI-augmented, but they still want human voices on your podcast.

AI can help uncover financial corruption by analysing thousands of leaked documents. It can reveal patterns in misinformation campaigns. It can identify trends in health data or climate reporting that might otherwise be missed. AI can even help us reach more people by translating content into local languages or adapting delivery for different platforms. But this only works if we build systems that are transparent, ethical, and aligned with the values of public service journalism. 


About the author

Paul McNally is an award-winning investigative journalist and media entrepreneur with 20 years of experience starting companies in AI, podcasting and community radio. He is the founder of Develop AI, an AI training and consultancy company, and he is the AI Advisor for International Media Support (IMS), helping media organisations across the globe implement AI effectively and ethically. He has won 16 awards and was a Visiting Nieman Fellow at Harvard in 2016, where he focused on improving the business models of media outlets in Africa.
