I use Google probably a hundred times a day, searching the internet as I check facts. I’ve felt a growing wariness as I watch the company’s headlong shift into AI, and that wariness turned to disgust when I read in the New York Times about a new kind of crime, perpetrated with Google’s help by “obituary pirates” wielding artificial intelligence tools.
The Times investigated what happened after Matthew Sachman, a 19-year-old Georgetown University freshman from New York City, fell onto the tracks at a subway station on New Year’s Eve and was killed by a train. Word of his death spread quickly, and a widening circle of friends and acquaintances went online, typing his name into Google with words like “subway,” “accident,” and “obituary.”
No actual news of Sachman’s death had been posted anywhere. But those Google searches produced “a blizzard of poorly written news articles, shady-looking YouTube videos and inaccurate obituaries,” according to the Times, as unscrupulous operators of AI bots filled the void.
The dynamics are twisted. Some of the stories falsely stated that the young man had been stabbed to death in the Bronx. “In the hours after his death,” the Times reported, “his name and likeness ricocheted around a dark corner of the internet, where profiteers using artificial intelligence tools capitalized on the anguish and desperation of the people who were mourning him.”
Reporters traced many of the false stories to a marketer in India, Faisal Shah Khan, who uses Google’s tools to monitor trending searches and drive web traffic to bogus articles and videos hastily created with AI large language models. The purpose? To create places for Google to serve up ads.
Working from his living room, Khan has been building his online advertising business for five years. He told the Times that obituaries make up a huge part of his “content farm.”
Google enables and then profits from this piracy, and Khan takes his cut. Families have tried, without success, to get Google and YouTube (which Google owns) to take down the fake obituaries and videos.
Readers of this column know that I am somewhat obsessed with obituaries and the importance of doing them the right way. But that’s not the only reason this story hit me hard. It’s a glimpse of the way AI is threatening to destroy even the most local kind of news possible: news about the lives of our neighbors and loved ones.
We are told AI might be good for journalism. Not likely. Large language models work by scraping massive amounts of data from the web. “That’s a problem for local news,” writes Steven Waldman, a co-founder of Report for America, in a recent article for the Poynter Institute. “When generative AI turns its thirsty eye toward local data, it will find ‘small language’ ecosystems.” Where there isn’t much content to reprocess, AI can simply plagiarize what little exists and concoct plausible-sounding lies.
Remember “Don’t be evil,” Google’s old motto? (Its parent company, Alphabet, retired it in 2015 in favor of “Do the right thing.”) Now we know what real evil looks like.