In a new HEPI Policy Note, Provide or punish? Students'
views on generative AI in higher education (HEPI Policy
Note 51), written by Josh Freeman, HEPI and Kortext explore
students' attitudes to new generative AI tools like ChatGPT
and Google Bard.
Since ChatGPT was released in November 2022, there has been an
explosion of interest in generative artificial intelligence
(GenAI) tools, which are capable of creating new content such as
text, images and video. Many people are excited by GenAI’s
potential to enhance learning, support students and reduce both
student and staff workload. But there is equal concern over a
potential epidemic of AI-based cheating.
This is the first UK-wide study to explore students’ use of
generative AI since ChatGPT was released. We partnered with
Kortext, the digital content, student experience and engagement
data experts and creators of the generative AI tool Kortext Premium,
to lift the lid on how students think about and use new GenAI
technologies.
We polled over 1,200 undergraduate students through UCAS, with
results weighted to be representative of the current student
population. We find that the use of generative AI has become
normalised in higher education. Most students have used an AI
tool to support their studies, and universities are generally
considered reliable at identifying work produced by GenAI.
However, students want not just clear policies but also support
in using generative AI to help with their studies.
Our key findings include:
- More than half of students (53%) have used generative AI to
help them prepare assessments. The most common use is as an ‘AI
private tutor’ (36%), helping to explain concepts.
- More than one-in-eight students (13%) use generative AI to
generate text for assessments, but they typically edit the
content before submitting it. Only 5% of students put
AI-generated text into assessments without editing it personally
– which we expect will be prohibited by most institutions.
- More than a third of students who have used generative AI
(35%) do not know how often it produces made-up facts, statistics
or citations (‘hallucinations’).
- A ‘digital divide’ in AI use may be emerging, with male
students, students from the most privileged backgrounds and
students of Asian ethnicity much more likely to have used
generative AI than other students.
- A majority of students consider it acceptable to use
generative AI for explaining concepts (66%), suggesting research
ideas (54%) and summarising articles (53%), but only 3% think it
is acceptable to use AI text in assessments without editing.
- A majority of respondents (63%) think their institution has a
‘clear’ policy on AI use, with only 12% thinking it is not clear.
Around two-thirds of students (65%) also think their institution could
spot work produced by AI.
- Students think institutions should provide more AI tools.
While three-in-ten (30%) agree or strongly agree that their
institution should provide such tools, fewer than one-in-ten
(9%) say their institution currently does so.
- Only around a fifth of students (22%) are satisfied with the support
they have received on AI. Most students (62%) are neutral or say
they do not know.
- Nearly three-quarters (73%) expect to use AI after they
finish their studies. They most commonly expect to use it for
translating text (38%), enhancing written content (37%) and
summarising text (33%). Fewer than a fifth of students (19%) expect to
use it for generating text.
The report further recommends that:
- Institutions should develop clear policies on what AI use is
acceptable and what is unacceptable.
- Where AI has benefits, institutions should teach students how
to use it effectively and how to check whether the content it
produces is of high quality.
- To prevent the ‘digital divide’ from growing, institutions
should provide AI tools that aid learning to students who cannot
afford them.
- The Department for Education (DfE) and devolved
administrations should urgently commission reviews to explore how
academic assessment will be affected by AI.
Josh Freeman, Policy Manager at HEPI and author of the
report, said:
As the dust settled after the launch of ChatGPT, some were
predicting the end of assessment as we know it. But so far,
higher education institutions have upheld standards of rigour,
and they deserve credit. Students trust institutions to spot the
use of AI tools and they feel staff understand how AI works. As a
result, rather than having AI chatbots write their essays,
students are using AI in more limited ways: to help them study
but not to do all the work.
However, action is urgently needed to stop a new ‘digital
divide’ from growing. AI tools are still new and often unknown.
For every student who uses generative AI every day, there is
another who has never opened ChatGPT or Google Bard, which gives
some students a huge advantage.
The divide will only grow larger as generative AI tools
become more powerful. Rather than merely adopting a punitive
approach, institutions should educate students in the effective
use of generative AI – and be prepared to provide AI tools where
they can aid learning.
Robin Gibson, Director of External Affairs at Kortext,
said:
Clearly the adoption of generative AI among the student
population has been rapid and largely positive. Kortext was one
of the first edtech companies to provide a bespoke generative AI
solution for higher education, mitigating the uncertainties
expressed by students in this research. We look forward to
continuing to empower students with tools that foster a dynamic
learning environment, to prepare future leaders for a world
defined by limitless possibilities.
Notes for Editors
- Kortext kindly provided some financial support for this
project. However, full editorial control was retained by HEPI.