Background
This was an appeal under section 87(2) of the Immigration and Asylum Act 1999 against the Immigration Services Commissioner’s refusal of Mr Folarin’s application, dated 1 August 2025, for registration at Level 1 to provide immigration advice and services through DSN Global Immigration Lawyers. The Commissioner refused the application on the basis that, although Mr Folarin had passed the Level 1 competence assessment, he had not shown that he was fit to practise, given a history of serious convictions including offences involving dishonesty, violence, firearms, counterfeit currency, drug possession, driving while disqualified, and breach of a community order. The Tribunal treated the appeal as a full merits appeal, considered all the evidence afresh, and upheld the refusal, finding that the appellant had not demonstrated present fitness to provide immigration advice or services.
AI Interaction
The AI-related aspect arose during the hearing, when the Tribunal questioned authorities cited in Mr Folarin’s legal submissions. He accepted that he had used ChatGPT, alongside Westlaw, to identify purportedly relevant cases, generate extracts and summaries, and polish wording, and that he had not read the underlying judgments himself. The Tribunal found that several cited authorities could not be located and regarded this as creating a real risk of misleading the Tribunal. That concern became part of the fitness assessment itself: the Tribunal held that reliance on unverified AI-generated legal authorities, particularly in a context where immigration advisers deal with vulnerable clients and legal proceedings, raised serious concerns about integrity and care, and about the risk that a similar approach might be taken when advising clients. The Tribunal expressly linked this to the statutory objective of ensuring that advisers do not knowingly mislead courts or tribunals, and referred to R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin) on the dangers of placing false AI-generated material before a court.
Notes:
- The case is notable because AI use was not merely criticised as a procedural defect in the submissions; it was treated as evidence relevant to the appellant’s regulatory fitness and integrity.
- The Tribunal did not hold that AI use in legal research is improper in itself. The concern was the use of AI-generated authorities without verification, especially where some authorities appeared not to exist.
- The decision shows how AI misuse can migrate from an advocacy issue into a professional-regulation issue where the person seeks authorisation to advise others in legal matters.