This is part three in our ongoing series on well-documented AI failures. With a focus on building trust in GenAI, Tumeryk provides an AI Trust Score that allows businesses with consumer-facing interactions to assess, monitor, and mitigate bias and hallucinations in generative AI. Today we cover a well-publicized GenAI failure in journalism.
Using generative AI for content creation has often been positioned as an easy, early use case. Trained on billions of lines of written text and built around next-word prediction, generative AI has wowed even the most accomplished writers among us by producing both fiction and non-fiction. It is so good that, increasingly, the challenge is spotting fiction where you expect non-fiction. Sports Illustrated discovered this in a most public way.
In November 2023, Sports Illustrated made headlines for publishing articles written, as it turns out, by fake AI-generated authors. The two purported journalists, outdoorsman Drew and fitness aficionado Sora, came complete with biographies and photos, all created by AI. For investigators, it seems, it was easy enough to find their photos on sites selling AI-generated headshots.
After online publication Futurism covered the fraud, Sports Illustrated deleted several articles from its website.
The Arena Group, the publisher of Sports Illustrated, announced in December 2023 that its board of directors had terminated its CEO, Ross Levinsohn. The Arena Group issued a statement explaining that the articles with the AI-generated headshots had been provided by a contractor called AdVon Commerce. Despite the evidence, AdVon disputed the claim that the articles had been AI-generated. Unfortunately, the harm was done, and consumer faith and trust in not only Sports Illustrated but also The Arena Group and AdVon have arguably taken a hit.
While it’s likely that the editors at Sports Illustrated knew full well what they were doing and wouldn’t have heeded the advice of any risk assessment tool, organizations today can learn from this experience and consider the benefits of creating and monitoring AI guardrails. Businesses can implement and enforce a scoring solution to quickly identify bias and hallucination in generative AI to minimize potential damage (direct or indirect) and mitigate further harm.
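To make the guardrail idea concrete, here is a minimal, hypothetical sketch of how a publishing pipeline might gate generated content on a trust score before it goes live. The scoring function, threshold, and function names are illustrative assumptions for this post, not Tumeryk's actual API or scoring method.

```python
# Hypothetical sketch: gate AI-generated content on a trust score.
# The scorer below is a toy stand-in; a real solution would assess
# bias and hallucination risk with far more sophisticated signals.

def trust_score(text: str) -> float:
    """Toy scorer that penalizes unverifiable superlatives as a placeholder
    for real bias/hallucination detection. Returns a value in [0.0, 1.0]."""
    red_flags = ("best ever", "guaranteed", "100% accurate")
    penalty = sum(flag in text.lower() for flag in red_flags)
    return max(0.0, 1.0 - 0.25 * penalty)

def publish_if_trusted(text: str, threshold: float = 0.8) -> bool:
    """Publish only when the trust score clears the threshold;
    otherwise hold the content for human review."""
    score = trust_score(text)
    if score < threshold:
        print(f"Held for human review (score={score:.2f})")
        return False
    print(f"Published (score={score:.2f})")
    return True
```

The key design point is the automated gate itself: content that scores below the threshold never reaches consumers without a human in the loop, which is exactly the kind of checkpoint that was missing in the Sports Illustrated case.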
Tumeryk offers an AI Trust Score™, modeled after the FICO® Credit Score, for enterprise AI application developers. This tool helps identify underperforming AI models and establishes automated guardrails to assess, monitor, and mitigate these models before they affect consumers or the public. The Tumeryk AI Trust Score™ is the ultimate in AI Risk Assessment, Mitigation, and Management.