ANALYSIS
Four ways public media are adopting AI for news
9 October 2025
From content moderation to fact-checking, how is AI changing news, and newsrooms for public media?

The public service media (PSM) approach to artificial intelligence has generally been, rightly, one of caution. Efforts to invest in and experiment with AI are rooted in a set of core values and the promise to uphold principles of responsible and ethical use.
Many, if not most, PSM now have Responsible AI guidelines, informed by these values, setting out where, how and why AI should be used, and there is considerable overlap between them. For example:
- No AI-generated content will ever be published without human approval;
- Content that has been made with, edited by, or embellished by AI will always be disclosed as such;
- Staff will be provided with training to ensure the responsible use of AI.
But while these guidelines are intended to govern the responsible use of AI, they do not prevent it from being used. And with the eruption of AI products and programmes in recent years, many public media organisations are embracing the opportunities that come with this technology and are using it to advance their public service mission, particularly when it comes to journalism.
However, experimentation and innovation with AI largely remain the preserve of larger and better-funded public media organisations. Given their scale, such organisations are better placed to procure AI products, if not develop them in-house.
In this article, we highlight four ways that AI is being used by public media to enhance or improve their news offering.
AI to moderate social media comments

Public service media have a duty to be active and present on third-party platforms, which are increasingly where audiences go for news and information. In both the UK and the US, social media is overtaking traditional forms of media – broadcast and print – as a source of news. As such, public media, with their mandate to be universal and accessible, must be on these platforms, reaching and engaging audiences with accurate, fact-based news.
Read more: Trust and Tech: Where should RNZ draw the line on AI?
But there are some inherent challenges that come with occupying such a space, and one of the side effects is the exposure to hateful, abusive, and offensive comments. Public media employ people to moderate content, and to protect other social media users – especially as this is a role the platforms themselves seem reluctant to engage with.
At the ABC in Australia, 111 members of staff were involved in this line of work. But the work carries risks: a 2022 survey found that 71 percent received comments denigrating their work on a weekly basis, while around half also encountered misogynistic and racist content just as often. “The toll on people and their loved ones can be enormous,” said Alexandra Wake, an Associate Professor in Journalism at RMIT University.
In response to this challenge, RNZ has deployed an AI-based social media monitoring tool, both to reduce the hours people spend on content moderation and to protect them from the worst of the online space. The New Zealand-developed tool, Sence, “uses AI to identify and categorise harmful content” on social media. Comments deemed to potentially cross the line are then flagged for a moderator, who decides whether to delete them.
“Because Sence can monitor 24/7 and is able to do so faster than manual monitoring we expect its introduction to reduce the instances where comments need to be turned off on posts. It means our people won’t be as directly exposed to harmful communication. It also means they can spend more time on producing and distributing content to our audience rather than monitoring comments.” – John Hartevelt, RNZ’s Interim Head of Content.
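Sence’s internal workings are not public, but the division of labour Hartevelt describes – software surfaces potentially harmful comments, a person makes the final call – can be sketched in outline. The snippet below is a hypothetical illustration only; every name, category, and threshold in it is invented for the example and is not drawn from RNZ’s tool.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ModerationResult:
    comment: str
    category: str      # e.g. "abuse", "misogyny", "racism", or "ok"
    confidence: float  # model confidence that the comment is harmful

def flag_for_review(comments: List[str],
                    classify: Callable[[str], ModerationResult],
                    threshold: float = 0.7) -> List[ModerationResult]:
    """Return only the comments a human moderator needs to look at.
    Nothing is deleted automatically: the model narrows the queue,
    and the moderator makes the final call."""
    queue = []
    for comment in comments:
        result = classify(comment)
        if result.category != "ok" and result.confidence >= threshold:
            queue.append(result)
    return queue

def toy_classifier(comment: str) -> ModerationResult:
    """Keyword stand-in so the example runs end-to-end; a real system
    would call a trained classification model here."""
    harmful_terms = {"idiot": "abuse", "liar": "abuse"}
    for term, category in harmful_terms.items():
        if term in comment.lower():
            return ModerationResult(comment, category, 0.9)
    return ModerationResult(comment, "ok", 0.99)

if __name__ == "__main__":
    for item in flag_for_review(
            ["Great report, thanks!", "You idiot, this is all lies."],
            toy_classifier):
        print(f"REVIEW ({item.category}, {item.confidence:.0%}): {item.comment}")
```

The point of the design is the final line of responsibility: the software filters and categorises around the clock, but deletion remains a human decision.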
AI to create news summaries
It is no surprise, given the immense popularity of social media, that audiences – particularly younger audiences – want bite-sized news, easily accessible and native to mobile devices. That’s why more news companies are offering news summaries, either at the top of the article, or as content in its own right.
“About five years ago, we developed some user principles after qualitative insights with a target group, aged 19 to 29,” said Thomas Nikolai Blekeli, a technological expert at Norway’s public broadcaster, NRK. “These principles contain several important guidelines for better reaching a younger audience. One of those principles was to break content into manageable chunks.”
NRK subsequently created an AI tool to produce summaries in the appropriate style and tone. And NRK isn’t alone: producing news summaries is one area of journalism where there is widespread consensus that AI can save journalists time and effort. Organisations such as Bloomberg, NRK, and the BBC are all on board, and a Reuters Institute for the Study of Journalism survey found that 70 percent of the media companies polled were planning to use AI for this purpose. NRK, however, developed their own tool, while many others are using off-the-shelf products.
In Singapore, meanwhile, Mediacorp have taken their news summaries one step further, creating FAST: a digital experience that offers news summaries in a format visually and experientially similar to TikTok, YouTube Shorts, or Instagram Reels. FAST stories can be consumed via the CNA app, and each one links to the full article. Generative AI is used to help produce the summaries, but a human always checks the content before it is published.
“As we mark 25 years of service, we are working on new initiatives to grow our existing base, meet new needs, and reach new audiences.” – Mediacorp’s Chairman Niam Chiang Meng.
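Neither NRK nor Mediacorp has published the details of its pipeline, but the basic shape – a generative model drafts the summary, and nothing goes out until an editor approves it – follows directly from the human-approval guideline quoted at the top of this piece. The sketch below is a generic illustration of that gate; the stub summariser and all names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    article_id: str
    summary: str
    approved: bool = False

def draft_summary(article_text: str, max_sentences: int = 3) -> str:
    """Stand-in for a generative model: here we simply keep the lead
    sentences. A real pipeline would prompt an LLM with style and tone
    instructions, as NRK describes."""
    sentences = [s.strip() for s in article_text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def publish(draft: Draft) -> None:
    # The gate that matters: no AI-generated content is published
    # without human approval.
    if not draft.approved:
        raise PermissionError("Summary has not been approved by an editor.")
    print(f"Published summary for {draft.article_id}: {draft.summary}")

if __name__ == "__main__":
    article = ("The broadcaster announced a new service today. It targets "
               "younger audiences. Rollout begins next month. Funding comes "
               "from the existing budget.")
    d = Draft("story-001", draft_summary(article))
    d.approved = True   # set only after a human editor has read the draft
    publish(d)
```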



AI to give you trustworthy answers to your questions
Chatbots such as Google Gemini or ChatGPT have changed the game for media companies producing news and information. There are financial and ethical issues with this development. In the most high-profile case, The New York Times sued OpenAI for copyright infringement, arguing that NYT content was used to train OpenAI’s chatbot, ChatGPT.
But there are also major concerns that chatbots are driving down web traffic for online news media. The Chief Executive of the Financial Times, Jon Slade, recently said they had experienced a 25-30 percent decline in the number of users arriving on their website via search engines. This will have profound implications for digitally native media companies, whose revenue mainly comes from people reading and browsing their content.

Yet beyond those very valid and existential concerns, there is also an issue of trust in the AI-generated answers. BBC research from earlier this year found that many of these chatbots produce significant errors. Hallucinations, meanwhile, are an inherent feature of generative AI and may never be fully eliminated.
If audiences are using these chatbots to get news and information, but their answers are unreliable, what does this mean for audience access to news and information?
An attempt to address this dilemma is underway in Taiwan, where the challenge of false and misleading information is particularly acute. To tackle the problem, the country’s public media organisations have embarked on a groundbreaking collaborative initiative to produce their own public media AI chatbot. The tool, developed in partnership with the research institute Academia Sinica, will draw on the news archives of the country’s public media system, including PTS, Rti, TBS, and CNA.
The Chair of the Public Broadcasting Corporation, Hu Yuanhui, said there were three goals for the partnership:
“enhancing the dissemination of credible information, accelerating the digital transformation of public media, and leveraging the positive potential of AI technology and creating a demonstration effect for its application.”
Moving forward, the working group tasked with developing the tool will create a public-facing chatbot, as well as a separate chatbot for professional use.
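The partners have not said how the chatbot will be built, but grounding answers in a trusted archive is typically done with some form of retrieval: find the most relevant archived reporting, then answer only from it, citing the sources. The sketch below assumes that approach and is purely illustrative – the toy keyword retriever stands in for whatever search and language-model components the real system will use, and none of the names are drawn from the Taiwanese project.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArchiveItem:
    source: str   # e.g. "PTS", "Rti", "TBS", "CNA"
    title: str
    text: str

def retrieve(question: str, archive: List[ArchiveItem], k: int = 2) -> List[ArchiveItem]:
    """Toy keyword-overlap retriever; a real system would use full-text or
    vector search over the broadcasters' archives."""
    q_terms = set(question.lower().split())
    scored = sorted(archive,
                    key=lambda item: len(q_terms & set(item.text.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_from_archive(question: str, archive: List[ArchiveItem]) -> str:
    """Compose an answer only from retrieved archive items and cite them.
    In a full system the retrieved passages would be handed to a language
    model with instructions to answer strictly from those passages."""
    hits = retrieve(question, archive)
    if not hits:
        return "No archived reporting found for this question."
    cited = "; ".join(f"{h.title} ({h.source})" for h in hits)
    return f"Based on archived reporting: {hits[0].text} [Sources: {cited}]"

if __name__ == "__main__":
    archive = [
        ArchiveItem("CNA", "Typhoon update", "The typhoon made landfall on the east coast on Tuesday."),
        ArchiveItem("PTS", "Budget vote", "Parliament passed the public media budget last week."),
    ]
    print(answer_from_archive("When did the typhoon make landfall?", archive))
```

Whatever the final architecture, the design goal is the same: answers that can be traced back to verified public media reporting rather than to an opaque training corpus.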

AI to pin down which claims need fact-checking
Social media platforms are vulnerable to information manipulation and interference, and are bursting with false and misleading news and information. They also run on algorithms that reward divisive content. The issue is serious: the World Economic Forum ranks misinformation and disinformation as the most severe short-term global risk, while over 90 percent of the UK population have encountered misinformation online, according to the Alan Turing Institute.
One of the strongest arguments for why public service or public interest media should be on social media platforms is that they act as an antidote to this problem, through the dissemination of accurate, fact-based information and, vitally, through their role in fact-checking.

But there are two immediate complications when it comes to working out what needs fact-checking: first, how to select which claims to prioritise; and second, how to ensure fact-checkers aren’t amplifying misinformation or introducing it to a wider audience. On the first point, a Poynter survey found that newsrooms used three criteria to decide which claims needed checking: “harm; the reach of the claim; and the power of those who made it.”
AI is now being employed to assist with the job of identification. In Belgium, a multistakeholder project involving many different partners, including the Flemish public broadcaster VRT, specifically targeted media disinformation. Also involved was the data intelligence company Textgain, which used the AI algorithm Factrank to help identify claims for checking. The algorithm “sifts through data to detect factually relevant statements for potential fact-checking … targeting the most widely shared and potentially misleading medical claims online.”
Since 2016, the UK’s eminent fact-checking organisation Full Fact has developed its own software to perform a similar role. In a single 24-hour period, the tool can monitor over a third of a million sentences, according to Full Fact. It tracks comments across social media platforms, as well as live TV, podcasts, and online news sites, giving the organisation scope well beyond what humans could achieve on their own. The tool groups and classifies claims before handing them over to the fact-checkers.
Crucially, in both instances these tools are used for identification purposes only and are not directly involved in the act of checking, which remains a task performed by humans.
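Neither Factrank nor Full Fact’s software is reproduced here, but the general pattern both describe – detect sentences that look like factual claims, then rank them for human attention – can be illustrated generically. The sketch below is hypothetical: the keyword-based detector is a stand-in for a trained model, and the weighting of the Poynter criteria (harm, reach, power) is invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    text: str
    reach: int            # e.g. shares or estimated audience
    speaker_power: float  # 0-1, crude proxy for the claimant's influence
    harm: float           # 0-1, estimated harm if the claim is false
    priority: float = 0.0

def is_checkworthy(sentence: str) -> bool:
    """Stand-in for a trained claim-detection model: flag sentences that
    contain simple factual markers. A real classifier would be trained on
    labelled claims."""
    factual_markers = ("percent", "%", "caused", "kills", "proves", "million")
    return any(marker in sentence.lower() for marker in factual_markers)

def prioritise(claims: List[Claim]) -> List[Claim]:
    """Rank claims using the three criteria cited above: harm, reach, and
    the power of those making the claim. The weights are illustrative."""
    for c in claims:
        c.priority = (0.5 * c.harm
                      + 0.3 * min(c.reach / 100_000, 1.0)
                      + 0.2 * c.speaker_power)
    return sorted(claims, key=lambda c: c.priority, reverse=True)

if __name__ == "__main__":
    sentences = [
        "This vaccine kills more people than the disease.",
        "What a lovely day it is.",
        "Crime has risen 40 percent since last year.",
    ]
    candidates = [Claim(s, reach=50_000, speaker_power=0.8, harm=0.9)
                  for s in sentences if is_checkworthy(s)]
    for claim in prioritise(candidates):
        # The ranked queue is handed to human fact-checkers; nothing is
        # labelled true or false by the software itself.
        print(f"{claim.priority:.2f}  ->  {claim.text}")
```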
As public media around the world continue to experiment and innovate with AI within their core public service remit, the Public Media Alliance will continue to support and highlight such efforts. Over the next few months, PMA – in partnership with Dr Kate Wright of the University of Edinburgh – will be releasing research reports exploring how public service media use AI in journalism.