The Reuters Institute at Oxford University has released a report on a survey examining whether and how people use generative Artificial Intelligence (AI), and what they think about its application in journalism and in other areas of work and life, across six countries (Argentina, Denmark, France, Japan, the UK, and the USA).
Findings on the public’s use of generative AI
ChatGPT is by far the most widely recognised generative AI product: around 50% of the online population in the six countries surveyed have heard of it, and it is also by far the most widely used generative AI tool. That said, frequent use of ChatGPT is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have used it just once or twice, and it has yet to become part of people’s routine internet use.
In more detail, we find:
- While there is widespread awareness of generative AI overall, a sizable minority of the public – between 20% and 30% of the online population in the six countries surveyed – have not heard of any of the most popular AI tools.
- In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
- Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18–24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
- Roughly equal proportions across the six countries say that they have used generative AI for getting information (24%) and for creating various kinds of media, including text but also audio, code, images, and video (28%).
- Just 5% across the six countries covered say that they have used generative AI to get the latest news.
Findings on public opinion about the use of generative AI in different sectors
Most of the public expect generative AI to have a large impact on virtually every sector of society in the next five years, ranging from 51% expecting a large impact on political parties to 66% for both the news media and science. However, there is significant variation in whether people expect different sectors to use AI responsibly: around half trust scientists and healthcare professionals to do so, while less than one-third trust social media companies, politicians, and the news media to use generative AI responsibly.
In more detail, we find:
- Expectations around the impact of generative AI in the coming years are broadly similar across age, gender, and education, except for expectations around the impact generative AI will have on ordinary people: younger respondents are much more likely than older people to expect a large impact on their own lives.
- Asked if they think that generative AI will make their life better or worse, a plurality in four of the six countries covered answered ‘better’, but many have no strong views, and a significant minority believe it will make their life worse. People’s expectations when asked whether generative AI will make society better or worse are generally more pessimistic.
- Asked whether generative AI will make different sectors better or worse, there is considerable optimism around science, healthcare, and many daily routine activities, including, in the media space, entertainment (where optimists outnumber pessimists by 17 percentage points), and considerable pessimism around issues including the cost of living, job security, and news (where pessimists outnumber optimists by 8 percentage points).
- When asked their views on the impact of generative AI, between one-third and half of our respondents opted for middle options or answered ‘don’t know’. While some have clear and strong views, many have not made up their mind.
Findings on public opinion about the use of generative AI in journalism
Asked to assess what they think news produced mostly by AI with some human oversight might mean for the quality of news, people tend to expect it to be less trustworthy and less transparent, but more up to date and (by a large margin) cheaper for publishers to produce. Very few people (8%) think that news produced by AI will be more worth paying for compared to news produced by humans.
In more detail, we find:
- Much of the public think that journalists are currently using generative AI to complete certain tasks, with 43% thinking that they always or often use it for editing spelling and grammar, 29% for writing headlines, and 27% for writing the text of an article.
- Around one-third (32%) of respondents think that human editors check AI outputs to make sure they are correct or of a high standard before publishing them.
- People are generally more comfortable with news produced by human journalists than by AI.
- Although people are generally wary, there is somewhat more comfort with using news produced mostly by AI with some human oversight when it comes to soft news topics like fashion (+7 percentage point difference between comfortable and uncomfortable) and sport (+5) than with ‘hard’ news topics, including international affairs (-21) and, especially, politics (-33).
- Asked whether news that has been produced mostly by AI with some human oversight should be labelled as such, the vast majority of respondents want at least some disclosure or labelling. Only 5% of our respondents say none of the use cases we listed need to be disclosed.
- There is less consensus on what uses should be disclosed or labelled. Around one-third think ‘editing the spelling and grammar of an article’ (32%) and ‘writing a headline’ (35%) should be disclosed, rising to around half for ‘writing the text of an article’ (47%) and ‘data analysis’ (47%).
- Again, when asked their views on generative AI in journalism, between a third and half of our respondents opted for neutral middle options or answered ‘don’t know’, reflecting a large degree of uncertainty and/or recognition of complexity.
The full report is available from the Reuters Institute at Oxford University.