Tech

CNET Defends Use of AI Blogger After Embarrassing 163-Word Correction: ‘Humans Make Mistakes, Too’

The technology site's top editor said the use of AI to write stories was an “experiment” in line with CNET’s history of “testing new technologies.”

The tech-focused journalism outfit CNET is dealing with the unfortunate consequences of leaning on artificial intelligence too heavily. An article written by an “AI engine” explaining compound interest now includes an addendum at the bottom that lists five errors contained within the original post: 

Correction, 1:55 p.m. PT Jan. 16: An earlier version of this article suggested a saver would earn $10,300 after a year by depositing $10,000 into a savings account that earns 3% interest compounding annually. The article has been corrected to clarify that the saver would earn $300 on top of their $10,000 principal amount. A similar correction was made to the subsequent example, where the article was corrected to clarify that the saver would earn $304.53 on top of their $10,000 principal amount. The earlier version also incorrectly stated that one-year CDs only compound annually. The earlier version also incorrectly stated how much a consumer would pay monthly on a car loan with an interest rate of 4% over five years. The earlier version also incorrectly stated that a savings account with a slightly lower APR, but compounds more frequently, may be a better choice than an account with a slightly higher APY that compounds less frequently. In that example, APY has been corrected to APR.
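The arithmetic behind the headline error is simple to check. A minimal sketch of the standard compound-interest formula (the function name and structure here are illustrative, not from CNET's article; the correction does not state which compounding frequency produces the $304.53 figure, so only the annual case is shown):

```python
def compound_interest(principal, annual_rate, periods_per_year, years=1):
    """Return the interest earned (not the ending balance) after `years`."""
    balance = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
    return balance - principal

# $10,000 at 3% compounded annually earns $300 in the first year --
# not $10,300, as the AI-written article originally claimed.
print(round(compound_interest(10_000, 0.03, 1), 2))  # 300.0
```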


CNET began using artificial intelligence to generate explainers for the site in November, the company’s editor-in-chief said on Monday. (Given that the purpose of such stories is essentially to make a play for search-engine traffic, you could fairly describe the whole scheme as assigning robots to write stories for other robots to read.) But the decision didn’t generate much notice until last week, when Frank Landymore at Futurism wrote a story noting that the company had “quietly” instituted the practice. The story gained significant traction online and led to questions about the future role of artificial intelligence in journalism and whether it was too early to lean so heavily on the technology.

Is your company using artificial intelligence in questionable ways? We want to hear from you. From a non-work device, contact our reporter at maxwell.strachan@vice.com or via Signal at 310-614-3752 for extra security.

CNET editor-in-chief Connie Guglielmo addressed the concerns on Monday in a post in which she described the use of AI to write “basic explainers” as an “experiment” in line with CNET’s history of “testing new technologies and separating the hype from reality.” She hoped the shift would free up staff to focus their time and energy on creating “even more deeply researched stories, analyses, features, testing and advice work we're known for.”

In the post, Guglielmo said that each AI-generated article was reviewed by a human editor before publication. In an attempt to make that process more transparent, she said, CNET had altered the bylines on the AI-generated articles to make clear a robot wrote them, as well as clearly list the editor who reviewed the copy, and would continue to review AI’s place on the site.

Less than an hour after Guglielmo’s post went live, CNET updated the compound interest explainer with a 163-word correction, fixing errors so elementary that a distracted teenager could catch them, like the incorrect idea that someone who puts $10,000 in a savings account that earns compound interest at a 3 percent annual rate would earn $10,300 the first year. Other articles produced by the AI engine also now include a note at the top that reads: “Editors' note: We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections.”

Motherboard reached out to CNET to ask whether the site would further alter its approach to AI-created journalism in light of the correction. In a statement sent through a generic press email account with no human name attached, the site seemed to throw the editor assigned to the story under the bus, saying they “are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too.” Several humans have definitely made mistakes here, but the one tasked with the miserable job of babysitting the robot is probably least among them.