
AI-written articles in US publication fail fact-check, plagiarism tests


US-headquartered news publication Cnet, which is owned and operated by media conglomerate Red Ventures, said on Wednesday that it would pause publishing stories written by an in-house artificial intelligence (AI) tool, after public criticism over factual errors and plagiarized content in the articles, as well as Cnet’s failure to disclose that AI had written them.

In a public note, Connie Guglielmo, editor in chief of Cnet, said that after one article authored by the publication’s ‘Money’ team was flagged “rightly” for factual errors and plagiarized content, the company did “a full audit” of its AI-written articles.

While Guglielmo’s post said that Cnet had published 77 articles written by AI between November last year and last week, it did not confirm how many articles contained errors. A report by The Verge claimed that Cnet found errors in 41 of its 77 published AI-written articles, which is more than half of them.


In her statement, Guglielmo said, “We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague.”

Addressing criticisms of plagiarism, she added that the company’s in-house plagiarism-checking tool “either wasn't properly used by the editor, or it failed to catch sentences or partial sentences that closely resembled the original language.” Cnet said it will now develop “additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes.”

Since the errors were flagged, Cnet has added a note atop articles that were written, at least partially, by its AI tool. The disclosure reads, “This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.”


To be sure, Cnet is not the first publication to have used some form of advanced automation in publishing content. Publications have operated AI and machine learning-based in-house automation tools such as Heliograf (of The Washington Post), Bertie (of Forbes) and Cyborg (of Bloomberg) for years. In India, content production and management firms such as Pepper Content also have their own in-house AI tool, called Peppertype.

However, the role of AI in content produced by publications has been brought to the forefront with the advent of publicly available AI writing tools, the most notable of which is OpenAI’s ChatGPT. These AI engines, powered by a technology called Large Language Models (LLMs), are trained on billions of data points to generate text that is contextually relevant and convincing. Notably, during his visit to India this month, Satya Nadella, chairman and chief executive of Microsoft, showed in his keynote a demo of ChatGPT writing a one-act play featuring street foods of Mumbai as characters, demonstrating the ability of AI writing tools to script creative content.

In its journalism and media trends predictions for 2023, the Reuters Institute and the University of Oxford noted that advances in AI made over the past year “have laid bare more immediate opportunities — and challenges — for journalism.”


“AI offers the chance for publishers (finally) to deliver more personal information and formats, to help deal with channel fragmentation and information overload. But, these new technologies will also bring existential and ethical questions — along with more deepfakes, deep porn, and other synthetic media,” the note added.

