Law Firm Risk Management Alert: Misadventures in ChatGPT: Lessons Learned

July 6, 2023

Early in 2023, New York lawyer Steven A. Schwartz found himself in a bind when faced with a motion to dismiss an action he had commenced in state court that was subsequently removed to the District Court for the Southern District of New York.  Schwartz had no experience with the issues raised in the motion to dismiss, his firm did not have a Westlaw or LexisNexis account, and his firm’s Fastcase account provided only limited access to federal caselaw.  So to prepare his opposition to the motion to dismiss, Schwartz opted to rely on an internet site he had heard about from press reports and family members: ChatGPT. 

Without understanding how ChatGPT worked – he believed it functioned as a “super search engine” – Schwartz prepared an opposition pleading that relied on citations and summaries ChatGPT generated in response to a series of prompts.  Schwartz did not, apparently, make any effort to obtain and analyze the decisions ChatGPT identified or even to confirm that any of the cited authority existed.  Because Schwartz was not admitted in the District Court, his law firm colleague Peter LoDuca had appeared on behalf of the firm’s client after the case was removed from state court.  Accordingly, it was LoDuca who signed and filed the March 1 “Affirmation in Opposition” to the motion to dismiss, and he did so without any review of the cited authority or inquiry to Schwartz about his research or contrary precedent.

In its reply, the defendant pointed out that the cited cases appeared to be non-existent.  After the court did its own research and was similarly unable to locate the cited authorities, it issued two orders directing LoDuca to file an affidavit annexing copies of the cited decisions.  Though alerted by both opposing counsel and the court that there was a significant problem with the opposition submission, neither lawyer took what should have been the obvious step of reconsidering the trustworthiness of the responses ChatGPT had generated.  Nor did they withdraw the challenged, and critically flawed, submission.  Instead, after obtaining an extension of time based on what the court subsequently deemed a misrepresentation, LoDuca filed an affidavit that Schwartz had prepared, and which annexed only the ChatGPT summaries rather than the actual case decisions the court had directed him to produce.

On June 22, 2023, two weeks after the June 8 hearing at which the two lawyers had the opportunity to explain their conduct, the court issued its Opinion and Order on Sanctions (“Opinion”).  The court found that the lawyers had “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”  Noting the “[m]any harms [that] flow from the submission of fake opinions” – including that it “promotes cynicism about the legal profession and the American judicial system” – and making multiple findings of bad faith on the part of both lawyers involved, the court, pursuant to Rule 11 and its inherent power, imposed sanctions on both lawyers and their law firm.

Here are some of the lessons lawyers and law firms should take from the ChatGPT case:

First, and perhaps most basic: do not use technology without understanding its limitations.  As provided in the Commentary to Rule 1.1, a lawyer’s fundamental duty of competence includes the obligation to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”  Here, the lawyer clearly failed to meet that standard.  The problem was not that he used ChatGPT: the court found that there was nothing “inherently improper” about using the technology.  Rather, the real problem was that the lawyer initially used ChatGPT without understanding its limitations.  He then compounded that error by continuing to insist that he did not understand that ChatGPT could produce fictitious cases, even after both opposing counsel and the court confronted him with the fact that he had relied on authority that simply did not exist.

Second: Don’t take on a matter where you lack the requisite experience or your law firm lacks the resources necessary to provide competent representation.  The court found that there was no evidence that Schwartz had knowledge of or experience with the federal law questions at issue, and the record established that his firm lacked adequate research resources for a federal court matter.  Presumably, if Schwartz had even some knowledge of the applicable law, he would more readily have been able to ascertain that ChatGPT had given him fictitious authority.

Third: If you make a mistake, don’t try to pretend that you haven’t.  The court pointedly noted that the situation would have been much different “if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant’s March 15 brief questioning the existence of the cases, or after they reviewed the Court’s Orders . . . requiring production of the cases. . . .  Instead, [they] doubled down and did not begin to dribble out the truth until . . . after the Court issued an Order to Show Cause” why they should not be sanctioned.  Reading the court’s Opinion, it is hard to escape the conclusion that the lawyers could have escaped without sanction had they offered an appropriate and timely acknowledgment of their mistake.  Instead, forgetting that the first rule when you find yourself in a hole is to stop digging, they dug themselves in ever deeper.

Fourth: Don’t sign an affidavit attesting to matters of which you have no personal knowledge.  LoDuca executed and filed an affidavit purporting to annex the case decisions as ordered by the court.  But it was Schwartz who authored the affidavit; LoDuca “had no role in its preparation and no knowledge of whether the statements therein were true,” and there was “no evidence that Mr. LoDuca asked a single question.”  Among the court’s bad faith findings against LoDuca was that he “violated Rule 11 in swearing to the truth of the April 25 Affidavit with no basis for doing so.  While an inadequate inquiry may not suggest bad faith, the absence of any inquiry supports a finding of bad faith.”

Fifth, and though it shouldn’t need saying, apparently it does: Don’t dissemble to the court.  The court called out the ways in which the lawyers misled it.  For example, in seeking an extension of time, LoDuca represented that he was out of the office on vacation.  Not only was that untrue, “[t]he lie had the intended effect of concealing Mr. Schwartz’s role in preparing the March 1 Affirmation and the April 25 Affidavit and concealing Mr. LoDuca’s lack of meaningful role in confirming the truth of the statements in his affidavit.”  In a May 25 affidavit, Schwartz represented to the court that he had relied on ChatGPT “to supplement the legal research” (emphasis in court’s Order).  But based on Schwartz’s testimony at the June 8 hearing, the court concluded that the representation was “a misleading attempt to mitigate his actions by creating the false impression that he had done other, meaningful research on the issue and did not rely exclusively on an AI chatbot, when, in truth and in fact, it was the only source of his substantive arguments.”  And laying out the specific facts contrary to one contention Schwartz made in his June 6 Declaration, the court also rejected Schwartz’s “highly dubious claim” that, prior to receipt of the May 4 Order to Show Cause, he “could not fathom that ChatGPT could produce multiple fictitious cases.”

Conclusion
Law firm risk managers should develop and implement protocols for their colleagues’ use of generative AI tools like ChatGPT.  Some steps to consider include the following:

  • Determine whether the lawyer requesting approval to use the AI tool has sufficient background and knowledge of the tool’s potential deficiencies to satisfy the ethical duty of competence.
  • Ask the requesting lawyer to provide confirmation of the reliability of the proposed AI tool for brief writing projects, including the accuracy of case citations (a first-pass automated check is sketched after this list).
  • Determine whether any outside counsel guidelines require the firm to obtain the client’s written consent to the proposed use of an AI model or tool.
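The citation-accuracy item lends itself to partial automation: before a brief is filed, a script can screen each cited case against a public case-law database and flag citations that return no match.  The sketch below is illustrative only, not a firm-approved tool.  It assumes the availability of CourtListener’s public search API (the endpoint URL, the “type” and “q” parameters, and the “count” response field are assumptions to verify against the current API documentation), and a “found” result establishes only that a citation exists – a lawyer must still read the opinion to confirm it supports the proposition cited.

    # Minimal sketch of a first-pass citation-existence check (Python).
    # Assumptions to verify before use: CourtListener's REST search endpoint,
    # its "type"/"q" query parameters, and a "count" field in the JSON reply.
    import requests

    SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"  # assumed endpoint

    def citation_appears_to_exist(citation: str) -> bool:
        """Return True if the reporter citation matches at least one opinion."""
        resp = requests.get(
            SEARCH_URL,
            params={"type": "o", "q": f'"{citation}"'},  # "o" = case-law opinions
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("count", 0) > 0

    if __name__ == "__main__":
        # Hypothetical citations pulled from a draft brief.
        for cite in ["410 U.S. 113", "925 F.3d 1339"]:
            status = "found" if citation_appears_to_exist(cite) else "NOT FOUND - verify by hand"
            print(f"{cite}: {status}")

Even a crude screen like this would likely have flagged the non-existent citations at issue in the Schwartz matter; the harder ethical work of confirming that real cases actually say what the brief claims still requires a lawyer.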

Pullman & Comley’s Professional Liability practice provides risk management advice and counseling to both law firms and corporate legal departments. If you have any questions related to this alert or questions related to professional liability, risk management or legal ethics, please contact David P. AtkinsMarcy Tench Stovall or Dana M. Hrelic.

 
