Tumeryk

AI Failures: A failure in legal research

by Tumeryk
April 22, 2025
in Gen AI

This is part two in our ongoing series on well-documented AI failures. With a focus on building trust in GenAI, Tumeryk provides an AI Trust Score™ that lets businesses with consumer-facing applications measure, monitor, and mitigate bias and hallucinations in generative AI. Today we cover a widely publicized GenAI failure in the legal industry.

In May 2023, news broke that attorneys Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow & Oberman faced sanctions in a New York federal court for submitting fictitious citations generated by OpenAI's ChatGPT in legal research for a personal injury case they were handling.

After researching legal precedent with OpenAI's ChatGPT, the two attorneys submitted a series of cases that the generative AI had fabricated outright. The citations ChatGPT produced (Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines) were all bogus: none of the cases existed, and the material even included fabricated judicial opinions.

Stan Seibert, then senior director of community innovation at the data science platform Anaconda, said: "One of the worst AI blunders we saw this year was the case where lawyers used ChatGPT to create legal briefs without checking any of its work. It is a pattern we will see repeated as users unfamiliar with AI assume it is more accurate than it really is. Even skilled professionals can be duped by AI 'hallucinations' if they don't critically evaluate the output of ChatGPT and other generative AI tools." Experiences like this undermine the commercial feasibility of generative AI use cases and erode the trust businesses place in automated, customer-facing implementations. Incidents like these reinforce the need for a scoring solution that quickly identifies bias and hallucination in GenAI, minimizing potential damage and mitigating further harm.

Tumeryk offers an AI Trust Score™, modeled after the FICO® Credit Score, for enterprise AI application developers. This tool helps identify underperforming AI models and establishes automated guardrails to assess, monitor, and mitigate these models before they affect consumers or the public. The Tumeryk AI Trust Score™ is the ultimate in AI risk assessment, mitigation, and management.
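The kind of automated verification this argues for can be illustrated with a small sketch. Everything below is hypothetical and greatly simplified, not Tumeryk's actual API: a naive regex pulls case-style citations out of a model's response, the response is scored by the fraction of citations found in a verified database, and anything below a threshold is withheld before it reaches a user.

```python
import re

# Naive "Plaintiff v. Defendant" matcher; real legal citation parsing is
# far more involved. Illustrative only.
CASE_PATTERN = re.compile(
    r"[A-Z][A-Za-z']*(?: [A-Z][A-Za-z']*)* v\. [A-Z][A-Za-z']*(?: [A-Z][A-Za-z']*)*"
)

def extract_citations(text: str) -> list[str]:
    """Find case-style citations in free text."""
    return CASE_PATTERN.findall(text)

def trust_score(response: str, verified: set[str]) -> float:
    """Fraction of cited cases found in a verified database
    (a hypothetical stand-in for a real trust-scoring service)."""
    cited = extract_citations(response)
    if not cited:
        return 1.0  # nothing to verify
    return sum(c in verified for c in cited) / len(cited)

def guarded_reply(response: str, verified: set[str], threshold: float = 0.9) -> str:
    """Guardrail: block responses whose citations cannot be verified."""
    if trust_score(response, verified) < threshold:
        return "Response withheld: one or more citations could not be verified."
    return response
```

Running the fabricated citations from the briefs above through a gate like this would have flagged them immediately, since none of them appear in any real case database.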

See it in action. Book a Demo!

Tags: AI Governance, AI Risk Assessment, AI Risk Management, AI Trust Score, Gen AI Security
© 2025 Tumeryk - Developed by Scriptfeeds
