
Colby Cosh: Did Trump’s former lawyer get screwed by ChatGPT?

Don’t hire a robot to do a lawyer’s job

Postmedia files; Peter Morgan/AP Photo

In April, I wrote a warning in this space about the trendy AI application ChatGPT, observing that in the age of the artificial-intelligence takeoff, we are relearning old lessons about the conjoined brilliance and stupidity of computers. Readers my age who began fooling with personal computers in the 1980s probably remember initial feelings of power and promise as we plumbed the delights of BASIC or VisiCalc. They will also remember discovering how infuriatingly literalist and fussy computers are, and how they had absolutely no understanding of what you were trying to accomplish at any given moment. One misplaced keystroke was always enough to make a program crash or a datafile disappear or a working algorithm start outputting garbage.

The “machine in the likeness of a human mind” has now, in our time, mastered the convincing use of natural human language. Computers can now produce a human-sounding essay or technical report or newspaper column upon being asked to do so in human language; they can even produce snippets of code in many software languages in response to a simple natural-language request.

But there has been no change in principle to the underlying nature of computers, and, as I showed in my April column, their use and creation of human language is fundamentally imitative. In trying to sound convincing, these bundles of circuits are quick to begin generating human-sounding gibberish that has no relationship to the world whatsoever. ChatGPT and its brethren are “intelligent” in almost every definable way, but they are also, and I mean this very literally, insane.


This is something worth keeping in mind if you have hopes of collaring AI software and putting it to work on cognitive tasks in a workplace setting. You shouldn’t trust one with an assignment you wouldn’t give to a smart but insane person. This is what I was getting at in the close of my April essay: “A lot of the immediate danger of these artificial intelligences is bound to come from people misunderstanding their nature and asking them for medical advice, or Cyrano-like seducer witticisms, or mission-critical software snippets.”

And now, in December, I don’t know whether to be proud of this forecast — or disappointed that I didn’t think of all my lawyer friends who can’t wait to start farming out humdrum research tasks to the robots. In June, there was an early high-profile example in U.S. federal court, when some New York lawyers were fined by a judge for presenting hallucinatory “gibberish” full of made-up case references. ChatGPT proved to be the ultimate culprit. And now something of the sort seems to have happened in a much more newsworthy setting involving Michael D. Cohen, a former lawyer for Donald Trump.

Cohen was convicted in 2018 of tax evasion and campaign-finance offences, and ended up serving a three-year prison sentence — or, more accurately, an in-and-out-of-prison sentence, because he got shifted to house arrest during the COVID pandemic but got caught sneaking out to restaurants and was sent back to the pen. In November, Cohen’s lawyer, David M. Schwartz, asked the court for an order terminating court oversight of Cohen’s supervised release. The Schwartz brief cited three helpful examples of courts granting early terminations.

Unfortunately for Schwartz and Cohen, U.S. District Judge Jesse Furman double-checked the citations. If you’re a lawyer, you definitely ought to check out the result, if only in the spirit of someone buying a horror-movie ticket. “As far as the court can tell,” Judge Furman says tersely, “none of these cases exist.” The bibliographical pointers in the Schwartz brief point zanily in many directions, none of them relevant. Cohen has a new lawyer who also can’t track down the citations, which look too much like ChatGPT hallucinations for anyone to ignore that possibility.


Furman has called Schwartz on the carpet, asking him to provide hard copies of the cited decisions or explain why he shouldn’t be subject to exemplary punishment. Of course, maybe there’s some other explanation for the fake citations — but surely any other explanation could only be worse for Schwartz and Cohen; the likelihood is that they’d blame ChatGPT even if it weren’t involved, which it almost certainly is. Artificial intelligences are intelligent but insane: it is not a moment too soon for you to understand this.

National Post
twitter.com/colbycosh

