SPEECH
How can AI bolster PSM’s mission?
12th September 2024
There are many exciting opportunities and possibilities for public service media (PSM) with Artificial Intelligence (AI), PMA’s CEO, Kristian Porter, said in a recent keynote address in Taiwan. But the deployment of AI must be carefully controlled through proper policy and governance, which adhere to fundamental public service media values.
In early September, PMA partnered with Taiwan’s Public Television Service (PTS) to hold their annual symposium, which this year was focussed on “The Challenges and Opportunities Facing PSM in the Age of AI.”
The event brought together industry experts, with a focus on specific projects which public service media (PSM) are trialling and prototyping. The speakers were from Deutsche Welle, France Télévisions, NHK, PBS, RNZ, RTVE, and SVT. Topics included how AI can help bolster and safeguard minority languages; how AI can be used in journalism and news; how AI can improve organisational efficiency; and how AI can be used in children’s content. A PMA podcast exploring these projects will be released later this month.
Explore: Public Service Media and AI (Resource)
The Public Media Alliance’s CEO, Kristian Porter, was in Taipei to give the event’s keynote, in which he focussed on the structures required to ensure any experimentation with AI adheres closely to public service media values. The speech has been edited for brevity and clarity.
Speech by Kristian Porter, CEO of the Public Media Alliance
Nǐ hǎo,
This is a timely and important event… I won’t be the first to warn you that artificial intelligence has the capacity to wreak havoc upon the media industry as we know it: from how it can be used to challenge irrefutable truth and spread mis- and disinformation, to how it could engage our industry in a race to the bottom, especially if media owners see AI as an opportunity to replace humans with machines. So, it is essential that we are aware of the risks AI poses, while also being prepared for how it can benefit us.
We’re going to spend much of today hearing from experts about their projects, innovations, and collaborations, which highlight the best of public service media and AI.
There are some incredible things happening in this space, and I want to quickly reference just two examples that demonstrate how AI is being deployed to further advance public service principles and values. In Sweden, Swedish Radio are using AI to improve searchability of audio content, and are also exploring how it will improve the accessibility of their apps, especially for the hard-of-hearing. And in South Korea, the Korean Broadcasting System have focussed on AI in the production of live content, enabling the tracking of multiple objects using a single 8K camera, which is helping to reduce the number of cameras needed in live shows by 70%.
But instead of talking any more about the exciting stuff – the glitz and the glamour of AI – that’s for our other speakers to do – I want to focus more on the serious side. And that is to highlight the very important steps we, as public service media, need to take – and are already taking – to mitigate the risks of AI, so that its potential can be harnessed in a responsible way.
At the Public Media Alliance, we talk a lot about the values of public service media – combined, these are values that set public media apart, but they are also values that can be applied to other news media, especially those who want to prioritise trust and accuracy.
We believe that these values need to be carried across all disciplines – whether you’re producing a light-hearted TV show, or a hard-hitting radio news programme.
And there should be no difference when it comes to public service media integrating AI into workstreams and content production processes.
These values – of trust, a commitment to truth, accountability, transparency, independence – should act as a baseline, and must all be maintained. They are an important guide when adopting AI throughout an organisation.
Without the correct safeguards and infrastructure in place to ensure AI is being used effectively and creatively, media entities risk losing audience trust. They risk disseminating false information, and they risk undermining the other work their staff do so well.
Strategies and Guidelines
What does this infrastructure look like?
Over the past few years, we have seen the emergence of so-called AI policies, guidelines, or strategies. Like editorial guidelines, these AI policies are very simple but essential documents, acting as a guide for all staff to understand how, where, and when AI can be used. They determine the rules, and they establish priorities that show a commitment to truth and accuracy. Many strategies, for example, outline that no piece of content that has been either partially or fully created with generative AI will ever be published without human approval.
Some, like the Associated Press, use the 80/20 rule, where 20% of processes require human interaction to edit, fact-check, and so on. Some policies say that any content produced by AI, even partially, must be publicly labelled as such – a watermark that demonstrates a commitment to transparency. One initiative, Content Credentials, was born from a collaboration between private and public media to create a watermark that verifies the origin of an image or video – the BBC and others are experimenting with this now. And another common principle is the promise to use algorithms for purely public service purposes: to build trust, be transparent, and explain how you’ve structured and determined your use of AI.
By creating these policies, you create the framework within which your staff can safely operate. Such policies demonstrate to your number one stakeholder, the public, that you take your role seriously, and they show how you differ from the big tech companies. Take the use of algorithms, or recommendation systems, for example. For Big Tech, their algorithms are unclear, often unavailable, unknown. So be transparent, show that you are public service-oriented, and that you can be trusted as a brand in more ways than one.
Governance
New roles are also emerging within public media to oversee the roll-out of the technology across the organisation. Some public media are also looking to include their wider staff on that journey. Some have launched AI Councils, bringing together employees from across the organisation to influence the direction it takes.
Including staff on that journey and providing them with a voice both in terms of identifying where AI could help solve problems, and what projects could be invested in, is hugely valuable, and is one of the best ways to stimulate innovative ideas and empower staff.
Generative AI has the capacity to impact and influence the work of every department across public service media. So having a dedicated team overseeing, managing and reviewing the rollout of generative AI across these teams is essential. Some PSM are placing an AI lead within each department.
But ‘responsible’ is really the key word throughout all of this.
How PMA is promoting ‘responsibility’ with AI
At PMA, we are working in this space on two projects.
Firstly, with funding from BRAID – Bridging Responsible AI Divides – and with support from UKRI Arts and Humanities Research Council and the University of Edinburgh, PMA is working with Dr Kate Wright to explore how generative AI is being used by public service media globally, specifically with regards to international news.
Secondly, we recently held a pilot workshop in Johannesburg for six PMA members across Southern Africa to discuss how AI can be effectively rolled out across organisations, in a controlled, relevant, and responsible manner, with plans to replicate it in other regions.
And there was one point raised by participants during that workshop which really stuck with me. They felt that within the AI industry, and across many AI tools, there is an inherent lack of representation and a “western bias”, especially regarding local, regional and minority languages. This is undoubtedly true, given that many developers are based in the G20 countries, and so the technology often skews towards English-, Spanish-, French- and Mandarin-speaking societies.
But many public broadcasters – including many of those who are represented here today – have a special role when it comes to supporting and protecting minority languages.
So, when you think about the word ‘responsible’, don’t think of it just in terms of your obligation to use AI ‘responsibly’. But think of it in terms of your larger social ‘responsibility’. Treat it in a way that can bolster that mission.
And this is happening. ABC and SBS in Australia, Thai PBS, and Japan’s NHK, for example, are using AI to develop text-to-speech engines, and to improve accessibility and in-language services. AI has a lot of potential in this space and could be an incredible asset for that really critical language-preservation mission which public service media has.
So before I finish, I’d like to leave you with three broad suggestions for implementing AI policies at your media organisation:
- Develop a policy so everyone in your organisation knows where, when and how AI can be used. Don’t leave anyone behind.
- If you can, hire or appoint a senior staffer to develop this policy and oversee its effective rollout… and empower AI leads within each department.
- Engage your staff and your audience about AI… remind them that this is an exciting time, but also that you are listening to and finding solutions to their concerns.
Thank you very much for your time and thank you once again to PTS for hosting this event.
Xiè xie
PTS is a member of the Public Media Alliance.