AI assistants continue to distort news: results of a global study

Results from a study conducted by the BBC in collaboration with the European Broadcasting Union (EBU) revealed that AI assistants frequently distort news content.

According to the report, AI-powered assistants are already replacing search engines for millions of people. Seven percent of online consumers rely on AI to get their news, a figure that rises to 15% among users under 25.

The extensive monitoring involved 22 public media organizations from 18 countries. Professional journalists assessed over 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity based on essential criteria including accuracy, source reliability, separating opinion from fact, and providing context.

The research uncovered numerous systemic flaws in four leading AI tools:

Gemini performed the worst in tests: 76% of its answers contained significant issues, more than double the error rate of its competitors. Experts attributed this to the tool’s poor performance in identifying credible sources.

Comparing these results with data from the study's initial phase earlier in the year showed that the error rates of AI assistants have declined but remain "still high."

"The study convincingly shows that these deficiencies are not isolated incidents. They are systemic, cross-border, and multilingual in nature, and we believe this undermines public trust," stated EBU Deputy Director General Jean-Philippe de Tender.

According to a survey conducted by the BBC, more than a third of British adults trust AI-generated news summaries. Among respondents under 35, this figure reaches half.

The research team believes these findings underscore the importance of addressing the identified issues.

Experts proposed a set of tools aimed at improving AI assistants' responses to news queries and at strengthening users' media literacy.

Notably, researchers have also identified signs of degradation in AI models, which they attribute to the use of content from social networks.