On Monday, a brand-new Reddit account popped up on the widely read forum r/AmItheAsshole, where users have their personal disputes arbitrated by strangers. This particular user asked if they had crossed a line by “refusing to babysit my stepmother’s kids because I have my own job and responsibilities.” The post itself was succinct, straightforward, and grammatically clean, explaining a situation in which the person’s stepmother and father often expected them to provide childcare on little notice, eventually leading to an argument.
“Now there’s tension at home, and I’m starting to wonder if I handled it the wrong way,” the redditor concluded. “I do understand that raising kids is stressful, but I also feel like I shouldn’t be obligated to take on that responsibility when it’s not my role.” The responses to this individual were largely supportive: The kids were not theirs to look after, many people replied, and moving out of the house would be the best course of action.
But according to AI detection software developed by Pangram Labs—which claims an accuracy rate of 99.98 percent and a false positive rate of just one in 10,000—the original story of family discord was AI-generated.
I saw it flagged as AI content while scrolling the page thanks to the latest version of Pangram’s Chrome extension, which rolls out to the public this week; at the paid tier of $20 per month, the tool scans posts on social sites including Reddit, X, LinkedIn, Medium, and Substack in real time, labeling them as human-written, AI-generated, or drafted with assistance from AI. The analysis also includes a measure of Pangram’s confidence in the conclusion: low, medium, or high.
Researchers have found AI slop all over the web, where it undermines journalism and social platforms alike. Text generated at least in part by AI accounts for more than a third of all new websites as of 2025, according to a study published this month by researchers at Stanford University, Imperial College London, and the Internet Archive. (The researchers used earlier Pangram tools to arrive at their findings.)
It’s this mess that Max Spero, CEO of Pangram and a self-professed “slop janitor,” wants to help clean up. He tells WIRED that adding instant analysis to the company’s browser extension offers people a more seamless way of checking for AI content across the sites they frequent.
“By providing proactive checks, it can be a lot more useful to people who just generally care about not seeing slop,” Spero explains. “It’s a big lift to go paste some text into an external tool. People just aren’t going to do that.”
Of course, made-up scenarios are nothing out of the ordinary on subreddits like r/AmItheAsshole, where trolls have been known to post engagement bait consisting of especially absurd fictions. Yet even a discerning reader might not suspect that a relatively unremarkable narrative like the one described above is fake. (The redditor who shared it did not respond to a request for comment regarding whether they had used AI or what they hoped to achieve with the post, which they later deleted.)
While no AI detection system is perfect, Pangram’s is regarded as the most consistent and accurate by third-party researchers at several universities; a 2025 University of Chicago study auditing AI detection software gave Pangram its highest rating and noted that its false positive rate was nearly zero, especially on longer passages. Spero says that one reason it outperforms competitors is that it’s trained in part on “harder examples that are closer to the boundary between AI and human.” I was unable to make it generate a false positive when testing it on articles published in WIRED.