Even the most sophisticated AI tools are prone to inaccuracies. Relying too much on them can lead to, at best, attorney embarrassment, and at worst, permanent professional damage and high-cost mistakes.
The Mata v. Avianca Disaster
Roberto Mata sued Avianca Airlines over injuries he sustained from a serving cart on a 2019 flight, claiming employee negligence. Steven Schwartz, an attorney with over three decades of experience, handled Mata's case.
The brief was submitted in 2023, and to save time, Schwartz enlisted the new AI chatbot ChatGPT to aid in the research. Mata's counsel submitted fake quotes and citations generated by ChatGPT and continued to stand by them even after the information was called into question. Judge P. Kevin Castel responded by imposing a $5,000 penalty on the attorneys and their firm.
In the resulting affidavit, fellow attorney Peter Loduca claimed he had "no reason to doubt the sincerity" of the research. The episode ranks among the first high-profile failures of AI-assisted legal research.
The Minhye Park v. David Dennis Kim Case
In January 2024, Judge Bloom issued numerous discovery orders after determining that a medical malpractice suit contained false information. The plaintiff's attorney had cited a non-existent state court decision concerning a Queens abortion case.
The attorney had failed to confirm that the case she cited was genuine, and the Manhattan-based appeals court referred her for disciplinary action.
She later admitted that the case had been "suggested" by ChatGPT, but she offered no further explanation and could not produce the decision itself.
AI’s Fatal Flaw: Hallucination Without Hesitation
Although AI can feel like a search engine for the internet, it does not retrieve and quote verified sources; it generates text, and that text is not always accurate. AI systems "hallucinate" falsehoods and present them as established fact with absolute confidence. Users who fail to double-check AI's claims often face significant penalties for the resulting inaccuracies.
These cases reflect a troubling trend: placing complete confidence in unverified output. Fast, easy answers are replacing careful, well-researched ones.
The problem is prominent enough that AI industry leaders have acknowledged this fundamental limitation. OpenAI's own documentation states that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." That warning should stop any attorney from using AI as their sole source of legal research, but it does not always do so.
The Disciplinary Hammer on AI Misuse
Many recent filings have drawn scrutiny because of AI's risks. In response, the American Bar Association issued formal ethics guidance applying its Model Rules of Professional Conduct to generative AI, warning of the technology's dangers and establishing practical guidance for its use.
In addition, multiple state bar associations have issued explicit guidance on artificial intelligence. California's State Bar specifically states that attorneys must understand AI's limitations and maintain the competence to distinguish real authority from fabricated information. Anyone who disregards these ethical rules could face disciplinary action.
Client Confidentiality Compromise
Recent data suggests that 2024 marked an unprecedented surge in data breaches, specifically at law firms. The surge has exposed weaknesses in law firm data management systems and raised concerns that AI tools introduce new vulnerabilities into firm cybersecurity.
Because law firms handle sensitive client data, AI systems deserve particular scrutiny. Many commercial AI systems store every query to improve their models. If attorneys input confidential client information into such systems, it could be accessed by third parties or incorporated into the AI's future responses.
Malpractice Time Bomb
Errors introduced by AI can have devastating results for both a law firm and its clients. A critical error in an attorney's argument can destroy its credibility entirely.
Courts have made it very clear that lawyers cannot delegate their professional judgment to AI systems. AI can serve as a research tool, but it cannot scrutinize authority or exercise the ethical judgment that courtroom practice demands.
Battle-Ready Strategies for AI Integration
These cautionary tales share a common thread: uncritical acceptance of whatever the AI produced. Like any powerful tool, AI demands mastery, not blind trust.
But it is impossible to deny that AI will soon become a fixture in many industries, including law. For businesses and their legal counsel, a clear set of rules and strategies for avoiding its pitfalls will make the difference between ethical and unethical practice.
At Mestaz Law, we do not use AI. However, we commit to performing the following best practices to maintain the utmost reliability in our services:
- Verify Everything: Rather than placing absolute confidence in AI, we treat its output like testimony from a biased witness. We rigorously fact-check every statement before relying on it.
- Secure Client Information: We are prepared to differentiate between standard AI and AI tools with enterprise-grade security and maintain clear data retention policies that protect confidentiality.
- Know the Enemy: We understand that AI systems can generate inaccuracies with absolute confidence, and we question everything they produce.
- Disclose Appropriately: Our ethics rules and law practice require appropriate disclosure of AI's role in legal proceedings.
- Stay Combat-Ready: We remain current on ethical guidelines and best practices as AI technology continues to advance rapidly.
Winning With AI, Not Losing Because of It
AI may offer remarkable potential, but recent cases make it obvious that the technology is not ready yet. And it may never be.
The attorneys who suffered sanctions, malpractice claims, and professional damage all made the same mistake: They trusted AI systems too much without adequately supervising or verifying the results. It is clearer than ever that experienced counsel who understand both the capabilities and the limitations of these technologies are indispensable.
Anyone looking to use AI in future legal strategies must remain vigilant. Any technology should serve the interests of clients rather than create new and potentially catastrophic risks. Like many other industries, legal practice will likely become AI-enhanced, but it will always remain human-directed.
- Category: Commercial Litigation
- By Daniel Mestaz
- May 15, 2025