Process over product

Integrating ChatGPT as collaborator into an assessment design for academic integrity and digital literacy purposes

  • Julian Owen Harris, Centre for the Study of Higher Education, University of Melbourne
Keywords: generative AI, academic integrity, digital literacy, university assessment, ChatGPT


The launch of OpenAI’s ChatGPT in late 2022, as most Australian universities wound down for the summer holidays, elicited varied responses from higher education practitioners, policy makers and commentators, ranging from heightened concern and proscriptive impulses through to cautious excitement about the potentially disruptive, and deceptive, impact of AI chatbot use by university students (Skeat and Ziebell, 2023).


Generative AI has both transformative and disruptive implications for conventional university assessment practices. Simultaneously, we observed a tension between university teaching and learning imperatives of digital literacy, academic integrity, student employability, and data security and privacy.

Large Language Models (LLMs) are built on deep learning techniques, trained to process data in ways modelled on human cognition, to generate human-like responses to natural language prompts. Generative AI can answer and compose questions, write narratives, summarise documents, construct essays, reports and reviews, and perform reflective writing (Li et al., 2023). Importantly, generative AI performs these tasks with potentially different degrees of accuracy, bias, and relevance with each prompt.


These dynamic and iterative learning abilities have significantly, sometimes imperceptibly, compromised the integrity and reliability of conventional university assessment types. Moreover, generative AI is improving incrementally and is increasingly integrated into everyday software, platforms and apps (Liu and Bridgeman, June 2023). Nor is it only traditional written assessments that are at risk of disruption and invalidation: AI image generators such as OpenAI’s DALL-E can produce high-quality, realistic and fantastical artworks.


ChatGPT-like AI models are designed for conversational and dialogic user experiences, trained on natural and intuitive patterns of language use. Even without any targeted training in ethical, effective and critical ‘prompt engineering’ (cf. Liu, 2023), students can output passable assessment content.


Beyond concerns around digital literacy, academic integrity and meaningful learning, prompting, even performed rudimentarily, blurs the lines between a student’s original thinking (and integration of sources) and machine-generated output. The foundational challenge lies in determining whether a student’s submission is the result of their applied understanding or of the AI’s algorithmic capabilities. Yet this interactional, iterative user experience of generative AI can also be harnessed by educators to design, facilitate and assess socially constructivist, authentic, analytical, and innovative approaches to student learning (Liu and Bridgeman, June 2023).


We report on a research project that implemented an iterative, nested, and collaborative assessment redesign (Lodge et al., 2023) as an alternative to a 2000-word Final Research Report due in the semester’s penultimate week. For the redesign, we broke the single submission down into three smaller critical reflections due across the semester. For the first, students used ChatGPT before, and then after, learning a prompt engineering approach (cf. Liu, 2023). For the second, students reflected on their engagement with generative AI as a collaborator, in comparison with their collaboration with peers on a task. The final critical reflection required students to anticipate how generative AI might affect their professional practices, drawing on the subject’s key topics.


With ethics approval granted, our research findings are drawn from the roughly 10% of all students (n = 83) who chose the redesigned option. We analyse their three submissions in terms of existing themes in the literature around academic integrity, digital literacy, institutional messaging and student belonging, and generative AI as ‘study buddy’ (cf. Skeat and Ziebell, 2023).


How to Cite
Harris, J. O. (2024). Process over product: Integrating ChatGPT as collaborator into an assessment design for academic integrity and digital literacy purposes. Pacific Journal of Technology Enhanced Learning, 6(1), 16-17.
SoTEL Symposium 2024