Team Blitz India
More than half of UK students (53 per cent) have used generative AI to help prepare assessments, with the most common use being as an ‘AI private tutor’ (36 per cent) to explain concepts, according to a recent study.
“Many people are excited by GenAI’s potential to enhance learning, support students and reduce both student and staff workload. But there is equal concern over a potential epidemic of AI-based cheating,” it noted.
In a new Higher Education Policy Institute (HEPI) Policy Note, ‘Provide or punish? Students’ views on generative AI in higher education’ by Josh Freeman, HEPI and Kortext explore students’ attitudes to new generative AI tools like ChatGPT and Google Bard.
It is claimed to be the first UK-wide study of students’ use of generative AI since ChatGPT was released. The UK’s independent think tank polled over 1,200 undergraduate students through the Universities and Colleges Admissions Service (UCAS), with results weighted to be representative of the current student population.
UCAS is a charity and private limited company based in Cheltenham, Gloucestershire, which provides educational support services. The study found that the use of generative AI has become normalised in higher education: most students have used an AI tool to support their studies, and universities are generally considered reliable at identifying work produced by GenAI.
However, students want not just clear policies but also support with using generative AI to help them with their studies.
More than a third of students who have used generative AI (35 per cent) do not know how often it produces made-up facts, statistics or citations (‘hallucinations’), it added. A ‘digital divide’ in AI use may be emerging, the paper noted, with male students, students from the most privileged backgrounds and students of Asian ethnicity much more likely to have used generative AI than their peers.
The report recommended that institutions develop clear policies on which uses of AI are acceptable and which are not; that, where AI has benefits, they teach students how to use it effectively and how to check whether the content it produces is of high quality; and that, to prevent the ‘digital divide’ from growing, they provide AI tools to aid the learning of those who cannot afford them.
The Department for Education (DfE) and devolved administrations should urgently commission reviews to explore how academic assessment will be affected by AI, it further suggested.