Background
The case arises from an immigration appeal in which a self-employed barrister, Mr Muhammad Mujeebur Rahman, drafted grounds of appeal relying on a non-existent authority, ‘Y (China)’, under a neutral citation that in fact belongs to a different Court of Appeal case. At an Upper Tribunal hearing in June 2025, when pressed to identify the relied-upon passage and explain the mismatch, he abandoned the genuine authority and later asserted, following ‘ChatGPT research’, that the fictitious case was real. The Tribunal then directed him either to produce a copy of the alleged judgment or to explain what had occurred, and the issue became a professional conduct matter concerning misleading material and wasted tribunal resources.
AI interaction
The AI issue is not judicial use of AI but a representative’s use of a large language model in legal research and work product. The decision’s headnote warning is explicit: ‘AI large language models such as ChatGPT can produce misinformation including fabricated judgments complete with false citations.’ In the factual narrative reported from the judgment, the barrister told the Tribunal that ‘he had undertaken ChatGPT research during the lunch break and the citation for Y (China) was correct, and it was a decision made by Pill and Sullivan LJJ and Sir Paul Kennedy.’ The Tribunal also described the post-hearing material handed up as ‘nine stapled pages’ that were not the Court of Appeal judgment but an internet printout containing misleading statements and references, rather than a verifiable authority from reputable sources.