Assessment design for the two AIs
As part of Academic Integrity Awareness Week, LEI and the Academic Integrity team presented a workshop sharing sector best practice, experiences from teaching and assessing in the age of ChatGPT, and ideas for the future of assessment design. Here are some of the key takeaways.
Banning Artificial Intelligence is not the answer!
In the short term, conversations around AI and assessment have focused on mitigating the misuse of AI in our assessments and detecting inappropriate use. Of course, it is important to ensure that our students meet learning outcomes and that we assure their skills and knowledge. But as TEQSA’s recently released guidance suggests, our students will also need to develop their skills as confident, ethical users of AI, and across the sector educators are already experimenting with integrative approaches to AI in assessment.
Artificial Intelligence is highlighting what we already knew about assessment design
While AI has created concern about assessments, in many ways it’s nothing new. Assessments that can be easily outsourced to AI were likely already vulnerable to things like plagiarism or contract cheating. Assessments that require iterative submissions, higher-order thinking and problem solving are more difficult to outsource than those which require descriptive writing or generic information retrieval. AI has highlighted an opportunity to rethink what and how we assess.
No assessment design is completely cheat-proof, but we can raise the cost of cheating
The ‘cost’ of cheating relates to the time, effort and resources that a student must expend in order to cheat on an assessment. Designing an assessment which is difficult to outsource (to a person or to AI), and, where appropriate, securing the assessment with technologies that control the assessment environment, can deter academic misconduct of all kinds. Not every assessment needs to be secured to the same level: we can think about the key assessment moments in our courses and programmes and deploy our resources to secure those assessments.
Students are using AI in a range of different ways – we need to keep talking about what’s ok
Between content-generation, writing enhancements, collating, parsing and summarising information, and acting as a tutor for complex concepts, AI can assist students in a range of ways, and students are experimenting with what these tools can do. We need to have open dialogue with our students about what is expected and what is ok in each course and discipline (our student Guidelines can help with this). This also involves discussing the learning outcomes for your assessments, and why they are so important.
Case Study: Personal Professional Development (PPD), Business School
In PPD, Year 1 students form a development plan towards the UoA Graduate Attributes. Assignments include critical reflections with the addition of an oral component: a presentation of their learning journey and plan.
In a pilot approach, Tutor/Lecturer John Murphy uses current events relating to key course concepts such as ethics and intercultural competency, taking an iterative and actionable approach to feedback and marking. Early in the semester, his 25 students write briefly on anything they found interesting or challenging in the workshop. This helps the tutor get to know the students and provides a (non-graded) writing sample for cross-checking later.
After an initial review of submitted assignments, students are asked to briefly respond to questions in the following workshop during a scheduled 15-minute writing time. Brief oral Q&As are used at times to help validate student submissions and achievement of outcomes. Where students are unable to write or speak briefly to their submission, an academic integrity investigation may be necessary – even where Turnitin and AI reports have not flagged anything unusual.
In future instances, students could be allowed to use AI to generate their visual presentations, offloading this effort to Artificial Intelligence tools, so they can focus on achieving learning outcomes.
Utilising vivas
Another approach is to embed a viva into major assessment tasks. To help standardise the Q&A and secure validity, the questions could be based on the rubric criteria for the overall assessment, with each criterion stating that the student is able to confirm their understanding in the viva. To avoid increasing workload, markers would skim the written component to prepare for the Q&A, and dedicate most of their marking time to the viva.
Securing the assessment environment – educational technologies
Cadmus is a useful tool which makes the drafting process visible. Students write their assignments inside the LMS, and Cadmus highlights where blocks of text have been pasted, tracks the words typed, and indicates the time taken to complete an assignment. This video demonstrates the potential of the tool.
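Cadmus’s actual implementation is not public, so the following is only a minimal sketch of the general technique: a TypeScript snippet (the element ID and event log are invented for illustration) showing how an in-browser editor could log pasted versus typed text and time on task.

```typescript
// Illustrative sketch only: not Cadmus's code. Assumes the page contains
// <textarea id="draft"></textarea>; all names here are invented.

interface DraftEvent {
  kind: "typed" | "pasted";
  chars: number; // characters contributed by the event
  at: number;    // timestamp in milliseconds
}

const log: DraftEvent[] = [];
const editor = document.getElementById("draft") as HTMLTextAreaElement;

// Record how much text arrives via paste
editor.addEventListener("paste", (e: ClipboardEvent) => {
  const pasted = e.clipboardData?.getData("text") ?? "";
  log.push({ kind: "pasted", chars: pasted.length, at: Date.now() });
});

// Record ordinary typing, ignoring the paste insertions handled above
editor.addEventListener("input", (e) => {
  if ((e as InputEvent).inputType !== "insertFromPaste") {
    log.push({ kind: "typed", chars: 1, at: Date.now() });
  }
});

// The kind of summary a marker-facing report might show
function summarise(events: DraftEvent[]) {
  const total = (kind: DraftEvent["kind"]) =>
    events.filter(ev => ev.kind === kind).reduce((n, ev) => n + ev.chars, 0);
  const minutes = events.length
    ? (events[events.length - 1].at - events[0].at) / 60_000
    : 0;
  return { typed: total("typed"), pasted: total("pasted"), minutes };
}
```

A production tool would do far more (server-side logging, authorship analytics), but the principle is the same: making the drafting process visible to markers.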
Another tool that can secure a face-to-face assessment environment is Respondus LockDown Browser. When set up in advance, it prevents students from accessing external websites during a workshop activity, such as completing a multiple choice or short answer quiz in MyUni or on paper.
Contact LEI for support with capability building in the use of educational technologies to enhance teaching, learning and assessment.
Integrating AI with integrity
How can educators guide students to use AI in their assessments while preserving academic integrity and safeguarding student learning? Currently, there is no ‘perfect’ solution. Furthermore, there are almost certainly ways to work with AI in assessment that we have not even conceived of yet. For now, here are some possible approaches to using AI with integrity that have been trialled at the ×îÐÂÌÇÐÄVlog:
1. The integrative approach
Assignments are built around generative AI, requiring its usage in some capacity. Students might be instructed to critique, edit, or provide commentary on the output of AI. Students could also be asked to use AI in the drafting period.
Academic integrity can be preserved, as student usage of AI is transparent, acknowledged, and scaffolded. Students are evaluated on their relationship to the AI output and how effectively they are able to utilise, iterate upon, or evaluate it.
Pros:
- Strong at building certain student capabilities, such as critical evaluation of a source.
- Inherently builds digital literacies and AI literacies.
- Focuses on learning processes, rather than a student’s ‘perfect output’.
- Anticipates the way students will likely use generative AI in their future working and personal lives.
Cons:
- May be weaker at building some student capabilities, such as creativity, imagination, and formulation of original ideas.
- There are some arguments that this approach reduces student agency and freedom of expression, such as in this piece by education lecturer Adrian J. Wallbank.
2. The reflective/declarative approach
Students are permitted to use generative AI in certain limited ways, but they must cite, declare, and/or reflect upon their usage of AI. Students may be required to cite the output of AI in their references. They could be instructed to write a short reflection describing how they used AI or provide the text of the prompts they typed in. In this way, citing AI is akin to citing any other book, website, or even a lecturer.
Pros:
- In addition to the advantages of the integrative approach, requiring students to reflect on their usage of AI promotes metacognition and critical analysis.
Cons:
- It may be hard to enforce usage with integrity and transparency; there is a risk that students simply do not correctly declare their usage.
- Equity issues: not all students may have equal skills or access to AI. Scaffolding student AI usage with formative activities in class is an essential step in mitigating equity issues.
3. A permissive approach?
Warning: this one is experimental! One law professor (2023) proposes a possible model where students can use AI with no limitations. However, the baseline standard becomes much higher and there is a strict grading curve; a sketch of what such a curve might look like follows the pros and cons below. Students’ own authentic work is still required: they cannot pass if they simply use AI to write their assignment, and the rubric and learning outcomes must be carefully designed to ensure this.
Pros:
- Using AI for cognitive offloading or as an ‘Extended Mind’ could allow for more focus on higher-order cognitive skills.
Cons:
- Students will present AI output as their own work.
- Certain capabilities, such as writing skills or formulating an argument from scratch, may be weakened.
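What a ‘strict grading curve’ could look like in practice will vary; as a purely hypothetical illustration (the grade bands and quotas below are invented), a rank-based curve might cap how many students can earn each grade, regardless of how polished the raw, possibly AI-assisted, submissions are.

```typescript
// Hypothetical strict curve: a fixed share of the class earns each grade
// band, however high the raw marks are. Bands and quotas are invented.

const quotas: Array<[grade: string, share: number]> = [
  ["High Distinction", 0.10],
  ["Distinction", 0.25],
  ["Credit", 0.40],
  ["Pass", 0.25],
];

function curveGrades(rawMarks: Map<string, number>): Map<string, string> {
  // Rank students from highest to lowest raw mark
  const ranked = [...rawMarks.entries()].sort((a, b) => b[1] - a[1]);
  const graded = new Map<string, string>();
  let i = 0;
  for (const [grade, share] of quotas) {
    const seats = Math.round(share * ranked.length);
    for (let s = 0; s < seats && i < ranked.length; s++, i++) {
      graded.set(ranked[i][0], grade); // ranked[i][0] is the student ID
    }
  }
  while (i < ranked.length) graded.set(ranked[i++][0], "Pass"); // rounding leftovers
  return graded;
}
```

Under a curve like this, unedited AI output only earns a high grade if it outranks classmates’ work, which is the deterrent the model relies on.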
To decide which approach to take in designing your own assessments, consider the pros and cons of each approach in relation to the specific learning outcomes you wish to build and assess. For example, if writing skills are not relevant to your learning outcomes, you may allow students to use AI to generate the content of their assignment under a declarative model. However, if writing good prose is a desirable skill, asking your students to provide written commentary upon the output of AI could be more appropriate.
No matter the chosen approach, scaffolding the use of AI is crucial to ensure student equity. Ask yourself whether your students have the necessary skills and knowledge of AI to use it effectively. You should also consider whether students have the necessary subject knowledge, expertise, or skills to critique its output.
Article by Amy Milka, John Murphy, Tamika Glouftsis, Paul Moss