Today’s guest post is courtesy of Emma, a lecturer who spends too much of her time marking, advising, and trying to keep pace with changing academic rules. Lately, she’s been struck by a curious contradiction: students may openly use ChatGPT in their assignments so long as they acknowledge it, yet the very same universities come down hard on anyone who consults a commissioned model answer. In her view, the line between the two is far less clear than policy suggests.
I teach law at a Russell Group university, and I’m trying to square a circle. Our sector is increasingly comfortable with students using generative AI – so long as they say so.
At the same time, we continue to ban “assignment writing services” outright. I’m struggling to see the principled line between crediting a chatbot that can draft whole paragraphs and crediting a model answer commissioned as a study aid. If the student writes their own submission, why is one framed as “literacy” and the other as “misconduct”?
The reason I’m having this rant is that one of our students – not one of mine, I might add – recently got pulled up for using an assignment writing service, UKEssays.com. His essay was fantastic, and that was the problem. The poor guy hadn’t achieved more than the low fifties in his past assignments, and this one was graded over 80. Alarm bells rang. He confessed. And I think it’s likely he’ll lose his place.
What universities now say
Below are paired policy positions from UK universities, including mine (I’m not telling you which, to avoid personal backlash). For each, the first statement permits some use of GenAI with acknowledgement; the second prohibits contract cheating / essay mills (commissioned text). I’ve chosen places where both statements are explicit.
Oxford
- Students may use GenAI to support study skills, with critical appraisal – University of Oxford
- Student Handbook defines academic misconduct (including unauthorised AI) and notes essay‑mills are criminalised; misconduct procedures apply – University of Oxford
Cambridge
- Students can use GenAI for personal study; permitted use in assessments must be clearly acknowledged (sample declaration provided) – Blended Learning Service
- Staff guide lists contract cheating as commissioning work from a third party (e.g., an essay mill) – studentcomplaints.admin.cam.ac.uk
UCL
- University‑wide guidance: if you use GenAI in assessed work, acknowledge it; UCL does not use AI detectors when marking – University College London
- UCL operates zero tolerance on essay mills / contract cheating (up to expulsion) – University College London
King’s College London
- Students are told to include a declaration acknowledging any GenAI use – King’s College London
- Academic Misconduct Policy references the criminal offence of providing/advertising contract‑cheating services; internal procedures treat this as misconduct – King’s College London
Manchester
- Staff guidance: students should declare GenAI use; submitting GenAI‑written text as one’s own is treated as plagiarism under the Academic Malpractice Procedure – StaffNet
- Academic Malpractice documents define contract cheating (including essay mills) and treat it as misconduct – University of Manchester Documents
Edinburgh
- “Does not ban the use of GenAI” for study; assessment use is restricted and must be acknowledged – Information Services
- University webpages define ghostwriting/essay mills as “contract cheating” and plagiarism – Student Administration
Bristol
- Local guidance advises staff on allowing/encouraging certain AI tools; library pages tell students to acknowledge AI tools like any source – University of Bristol
- “Contract cheating” page explicitly lists using AI to complete all or part of an assessment as contract cheating unless the task permits it; dedicated procedure in place – University of Bristol
If you’re sensing the pattern: declare AI use (in some settings) = acceptable; commission text = misconduct. That pattern is not accidental. In England and Wales it is now a criminal offence to provide or advertise contract‑cheating services to students studying at English universities (the “essay‑mill ban” in the Skills and Post‑16 Education Act 2022 – Legislation.gov.uk). The liability lands on providers, not on the students who use them; universities, understandably, still take a hard line in policy.
“But surely you can credit a model answer too?”
Here’s where I don’t quite buy the tidy dichotomy. A small number of model‑answer companies (e.g., UKEssays.com, LawTeacher.net, NursingAnswers.net, UKDiss.com…) publish a Fair Use Policy telling students not to submit the purchased work, but to treat it as a worked example: research around it, then write your own (see e.g. UKEssays.com). That is, in spirit, the same “scaffold, then cite/acknowledge” logic our AI guidance leans on.
From a learning‑science point of view, this isn’t outlandish. Two long‑standing ideas support ethical use of model answers as study tools:
- Worked‑example effect (cognitive load theory) — novices learn efficiently by studying well‑structured worked solutions before attempting independent problems; it reduces unproductive search and supports schema formation.
- Scaffolding / zone of proximal development — expert support that gradually fades enables learners to perform beyond their current independent level, internalising strategies as they go.
In other words: if a student reads a high‑quality model answer, checks the sources, and then writes their own original submission, they’re using a worked example – not outsourcing authorship. That’s the same defence we accept when a student uses GenAI to plan or brainstorm and then writes the final piece, acknowledging the tool. I actually have NO problem with either.
Of course, this is not what the poor chap did who now faces expulsion from our university. He handed in the purchased essay without tweaking so much as a comma. A complete waste of our time, and a huge waste of his money. But he could have used the assignment properly. I had the benefit of glancing over it, and it was really rather good – I would gladly have offered it as an exemplar myself, were it my subject.
So why the asymmetry? Legally, the essay‑mill ban criminalises the supply side, so universities are wary of normalising any relationship with commercial providers. But wait: this abhorrence of essay mills is nothing new; it existed long before the ban. So I can’t really explain the difference myself – perhaps it’s simply that ChatGPT is everywhere and universities have realised they can’t beat it.
Ethically, the authorship line feels clearer with a tool than with a human‑produced text. After all, you’d be frankly daft to hand in ChatGPT content without so much as checking it. But pedagogically, “model answers used correctly” and “GenAI used correctly” can both be legitimate scaffolds, again in my view.
And yet, ChatGPT isn’t a barrister…
There’s also a quality point. A commissioned model answer in law is (in theory) written by a subject‑qualified person; chatbots are language models with known accuracy issues. The legal domain is particularly sensitive:
- Stanford HAI reported that general‑purpose chatbots hallucinated on 58–82% of legal queries; fine‑tuned legal tools still hallucinated on ~1 in 6 benchmarked queries – Stanford HAI
- A 2024 peer‑reviewed study of reference accuracy found hallucinated citations in 28.6% of GPT‑4 outputs (and 39.6% for GPT‑3.5; Bard was worse) in a scholarly context – PMC
- The legal system has seen repeated sanctions for fake AI‑generated case citations — dozens of incidents documented in 2024–25 – The Washington Post
Yes, better prompting and retrieval pipelines reduce error rates in specific workflows, but the broad conclusion holds: GenAI can produce fluent nonsense, and in law that is a serious hazard – Nature.
What I tell my students
If you need support, say so. I can provide model answers after marking; I can show you how to use them as worked examples; I can advise on responsible, acknowledged use of AI tools. What I can’t condone is outsourcing authorship – whether to a stranger on the internet or to a chatbot that writes your paragraphs to the brief for you. That’s the red line our regulations still draw.
At the same time, I’d welcome a sector conversation that’s more consistent:
- If we allow GenAI with acknowledgement for planning, summarising, or generating checklists, we could just as reasonably allow model answers as worked examples — provided there’s no copying, students verify sources, and the final submission is entirely their own. This aligns with QAA’s emphasis on designing assessments that reduce incentives for contract cheating, rather than relying purely on prohibition — Quality Assurance Agency
- Where departments do allow GenAI in assessment, they already require a clear declaration of use. The same idea could cover studied model answers: require disclosure (e.g., “I reviewed X model answer to understand structure; I did not copy text; all sources were independently verified”). We could even require students to submit a copy of the model answer alongside their work. Universities already publish sample AI declarations; adapting these would be straightforward.
Where I land:
I don’t condone ghostwriting. But I also don’t pretend that a fluent paragraph from a chatbot magically becomes “your voice” just because you wrote “used ChatGPT” in an appendix. If anything, a qualified human‑authored model answer – used as a worked example, not as your submission – may be safer pedagogically than pasting from a system with a documented appetite for hallucination. Just make sure your final work really is your own.