INSIGHT

What do people think of Generative AI?

18th June 2024
AI raises all manner of complex issues, particularly with its use in news. But it’s also something that should be responsibly experimented with, writes the BBC’s Peter Archer.

This Insight was originally published on the BBC.


By Peter Archer, BBC Programme Director of Generative Artificial Intelligence

Generative AI (or GenAI) burst into the public’s consciousness at the end of 2022 when OpenAI launched ChatGPT. Within two months of its launch, ChatGPT had 100 million users, a milestone that took TikTok nine months and Facebook four years to reach. Since then, it has raised a number of profound questions for the media industry – from whether and how media companies should use GenAI tools, to questions of copyright and IP, to how the industry might be reshaped over the longer term by tools that radically change how content can be created across text, audio and video.

We believe it is vital that the BBC experiments with GenAI and does so responsibly. This means being tuned into the views and expectations of audiences, artists, and the wider creative industry. The issues GenAI raises are complex and frequently in the news, but we shouldn’t shy away from exploring how GenAI might bring value both to how we work and to what we create.

As part of this, it’s vital we have a good understanding of what audiences think and want when it comes to GenAI. Much of this learning will come through experimentation and pilots, but audience research also has an important role to play in ensuring we keep in step with audiences. Unsurprisingly for such a new field, we found there has been relatively little research into audience views of how GenAI is used in media.

So, in conjunction with Ipsos UK, we commissioned new research to assess perceptions of GenAI, including detailed interviews with 150 people from the UK, USA and Australia. We asked them about their thoughts, reactions and feelings towards GenAI being used for content they want to consume within news, audio and video.

The full report can be found here. The findings highlighted a few critical insights:

  1. Many people believe GenAI is the real deal – a permanent, significant and disruptive step-change in technology and media. They said it feels like a world-shifting innovation, like the introduction of the internet, with the capability to change society. 58% of UK adults agreed that products that use AI will profoundly change daily life in the future.
  2. People are nervous about how GenAI might be used in the media, including how it may impact creative roles, and they want reassurance from media companies about how they will engage with GenAI. 65% of UK adults agree that products and services using AI make them nervous.
  3. Many people have already developed an instinct about where they are likely to be comfortable – or not – with the use of GenAI in media.
  • There is some openness to the use of GenAI in audio content – many feel there is space for GenAI in the production of audio content, particularly where it feels like an extension of existing technology, such as improving personalisation, or producers using it as they would a search engine to help generate ideas and inspiration.
  • People are more concerned about the use of GenAI for video content – while audiences are somewhat comfortable with its use for personalisation or operational functions, many feel going beyond this could undermine human connection and devalue artistic skills.
  • Using GenAI in journalism is felt to be very high risk – while people recognise GenAI could bring value in a few limited ways (such as by reformatting articles from text to audio), there are concerns about its potential to spread misinformation, deepen societal division, and replace human interpretation and insight.

While audiences expect all organisations to use GenAI responsibly, they are, quite reasonably, likely to hold public service media organisations to a particularly high standard, given their public missions and access to public funding. They also emphasised the need for media companies to deliver value, to put people and human creativity first, and to always be transparent in their use of GenAI.

These views support the guiding principles that we at the BBC have already developed for our work in GenAI. As a reminder, those are that we will:

  • always act in the best interests of the public
  • always prioritise talent and creativity
  • always be open and transparent with audiences when we use AI to support content-making

To support these principles we have also published editorial guidance on the use of AI at the BBC.

It’s important to note that while this new report helps us understand how audiences view the use of GenAI in media, it is not a roadmap for future development. It doesn’t consider the views of other communities, such as creative talent or production teams, whose perspectives are very important to how any media organisation uses GenAI. Nor does it consider the technology itself, including questions of accuracy and the incorrect or misleading results that AI models can generate (hallucinations).

Many of the issues raised in the report are not unique to the BBC. So it’s important that we work together with our colleagues in the media industry to discuss the opportunities and challenges of GenAI.

There’s also the simple truth that this is new territory for all of us, and many people’s views are likely to evolve as the potential of GenAI becomes better understood and its use becomes more commonplace. We will reflect carefully on what audiences have told us and use it as one input to shape further experimentation.

Thanks for reading.


About the author

Peter Archer is the BBC Programme Director of Generative Artificial Intelligence.
