This is part four in our ongoing series on well-documented AI failures. With a focus on building trust in GenAI, Tumeryk provides an AI Trust Score™ that allows businesses with consumer-facing interactions to assess, monitor, and mitigate bias and hallucinations in generative AI. Today we cover a well-publicized GenAI failure in marketing, though it will hardly be our last: there are so many examples of generative AI failures in marketing that could have been easily mitigated with an AI Trust Score™.
Many of you have undoubtedly used one of the many food delivery services. And whether you spend much time on a review site like Yelp or not, you know that a photo is worth 10,000 words. If a listing doesn't include photographs, we're far more likely to skip it and order from restaurants that show photos of their food.
To optimize its delivery service, Uber Eats decided to use AI to generate photos of dishes for restaurants that didn't provide any. Unfortunately, the generative AI couldn't tell the difference when the same term, "medium pie," was used to describe both a pizza and a dessert pie. While we all know instinctively that a pizza and a cherry pie are different things sold by different kinds of restaurants, the AI failed to make that distinction. In the same example, the generative AI invented a brand of ranch dressing called "Lelnach" and showed a bottle of it in the picture Uber Eats used.
While Uber Eats arguably didn't suffer any direct financial harm from these clear AI failures, it did suffer reputational harm. Users likely viewed the mistake with humor, but undoubtedly many also wondered whether the other food photos were real or fictional. Uber Eats' reputation and brand value took a hit.
As so many of these stories illustrate, it wouldn't have been difficult for Uber Eats to run an AI Risk Assessment and deploy generative AI in a more trustworthy manner. This failure, while seemingly innocuous, could easily have been mitigated.
Another clear and easily mitigated failure is the case of the event that never happened. On October 31, 2024, thousands of Irish revellers showed up in Dublin City Center for a three-hour Halloween parade. All the attendees had discovered an advertisement online publicizing the event. Unfortunately, the parade was never planned and turned out to be an AI-generated hoax.

According to Defector.com, a media and sports blogging site that investigated the phony invitation, it was generated by a website owned by a Pakistan-based content bureau dedicated to promoting Halloween globally. The website was exploiting Google's search algorithms, creating Halloween event listings to promote My Spirit Halloween costume shops and manipulate ad revenue. Defector determined that My Spirit Halloween was using AI to automatically generate listings for every city, on the presumption that most cities would have some related festival or event.
Unfortunately, as inconvenient as this was for the people who showed up in Dublin, according to Defector this is just one example of how AI-generated content can manipulate Google's algorithms. It's likely not the last time we'll see something like this.
Interestingly, the company behind the fake invitation listed the event as "cancelled" rather than fraudulent, as though it had thrown a party no one attended, when in reality everyone showed up for a party that never existed.
There are many examples of accidental or deliberate AI fraud in marketing. While Tumeryk's AI Trust Score™ can provide the solution to identify and stop accidental AI fraud, there will always be bad actors who purposely use AI to their advantage. That leads to a wholly separate but important conversation around the ethical use of AI and responsible AI governance.
For those with higher ethical and responsibility standards, Tumeryk offers an AI Trust Score™, modeled after the FICO® Credit Score, for enterprise AI application developers. This tool helps identify underperforming AI models and establishes automated guardrails to assess, monitor, and mitigate these models before they affect consumers or the public. The Tumeryk AI Trust Score™ is the ultimate in AI Risk Assessment, Mitigation, and Management.