Every day I get emails telling me how AI is transforming the news business and how I can revolutionize the way we do our work at the Independent.
“Welcome to the end of creator’s block,” one recent email announced. “How will you use your AI superpowers?”
I found some clues about what AI can actually do for a small news organization in an article published this week by the Nieman Lab, a project of Harvard’s venerable Nieman Foundation for Journalism, which has enthusiastically jumped on the AI bandwagon. The piece, by reporter Andrew Deck, is titled “Local Newsrooms Are Building AI Chatbots Fast and Cheap,” an oddly upbeat headline considering what’s in the article. It describes a project launched by the Center for Innovation and Sustainability in Local Media at the University of North Carolina, which funded the development of chatbots at four small newsrooms.
The hype about the promise of AI to reverse the well-documented decline in local journalism is intense, and big money is behind it. “There’s a lot of pressure from the industry in terms of funders and journalism support organizations,” Sarah Vassello of the UNC innovation group told Deck. Those funders want local newsrooms to use AI, but there’s a hitch: they can’t say exactly how it’s going to be useful. So, the researchers set out to discover that, creating and testing chatbots in real newsrooms.
The experiment did not go well.
The people at the News Reporter, a weekly paper in Columbus County, N.C., thought that a chatbot could answer basic questions from readers and handle subscription matters. To “keep the chatbots on topic and minimize errors,” the designers of the bots limited their “knowledge base” to the newspaper’s own archives and did not enable them to search the web. The restriction backfired: when users asked simple questions like “Who is the current president of the U.S.?,” the bot could only give a mindless answer: “I don’t know.”
The News Reporter ran the exercise for 45 days. In that time, only 11 people used the bot. And it gave false answers to the questions it did receive. “We chose not to invest further in the project,” said the newspaper’s Rachel Smith.
At Chapelboro, a news site in Chapel Hill, N.C., they gave their chatbot a friendly name, “Chappy,” but readers asked them to take it down because it was “hallucinating” — making up false answers even to basic questions like the location of the newspaper’s office.
At the Henrico Citizen in Virginia, the new chatbot, “Henry,” took so much time to update that it diverted reporters from their work. According to Jessica Mahone, interim director of the UNC center, Henry “made too many mistakes to the point where it was not aligned with their mission.”
Still, the researchers were undaunted, arguing that the chatbots “might not be perfect” but were still worth pursuing.
I think they have forgotten what journalism is. Hallucinating facts is not some curious flaw to be brushed aside — it is completely antithetical to what a newspaper exists to do.
There are plenty of other places to find made-up facts these days.