Connection between curriculum & assessment design and academic integrity

University courses often comprise lectures and tutorials. To determine whether course learning outcomes have been achieved, assessment may include quizzes, a mid-semester test and a final exam. The number of quizzes and the weighting of tasks may vary, but what stays the same is that all course assessments are summative: they count toward the final grade and are supposed to give academics an overview of a student's overall learning and achievement. In this blog we will explore why, despite our best intentions, such an assessment design can sometimes lead to academic integrity issues.

First, the quizzes.

What’s the problem?

Imagine an assessment task that consists of one or more quizzes. Each quiz covers a number of topics, and each student gets a set of random questions from the question bank and has only one attempt to complete the task. Whilst randomisation of questions is an attempt to minimise academic integrity issues, it also means that students are sometimes not exposed to, or assessed on, certain parts of the course content and therefore never get a chance to practice their knowledge in those areas. When a student does well in a quiz, they may gain a false sense of security, believing themselves well prepared for further assessments and ready to apply their knowledge in more practical settings, when in reality they may still have considerable knowledge gaps. When these students get to the next assessment, they may realise that what they thought they knew is, in fact, insufficient. This can lead to panic and, as a result, to academic misconduct.

When educators look at high quiz pass rates without considering the randomisation of questions, they too may fall into the trap of false security, assuming that students are engaged and have developed a decent understanding of the topics covered. This may prevent educators from adjusting their teaching to address existing learning gaps.
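To get a feel for how easily randomisation can leave topics untested, here is a quick back-of-envelope simulation. The numbers (a 50-question bank spread evenly over 10 topics, with 10 questions drawn per quiz) are purely illustrative assumptions, not figures from any particular course:

```python
import random

random.seed(1)

# Hypothetical setup: 10 topics x 5 questions each = a 50-question bank,
# with each student's quiz drawing 10 questions at random.
TOPICS = 10
QUESTIONS_PER_TOPIC = 5
QUIZ_SIZE = 10

bank = [(topic, q) for topic in range(TOPICS) for q in range(QUESTIONS_PER_TOPIC)]

trials = 10_000
covered_counts = []
for _ in range(trials):
    quiz = random.sample(bank, QUIZ_SIZE)  # one student's random quiz
    covered_counts.append(len({topic for topic, _ in quiz}))

avg_covered = sum(covered_counts) / trials
missed_any = sum(1 for c in covered_counts if c < TOPICS) / trials
print(f"Average topics assessed per quiz: {avg_covered:.1f} of {TOPICS}")
print(f"Quizzes missing at least one topic: {missed_any:.0%}")
```

Under these assumptions, a typical quiz assesses only about seven of the ten topics, and almost every quiz leaves at least one topic entirely untested — which is exactly the blind spot described above, for both the student and the educator reading the pass rates.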

How to avoid the pitfall?

Ironically, one of the strategies to mitigate academic integrity issues and increase student engagement is to … incorporate non-assessed practice quizzes into the course. Added to each topic/lecture, these quizzes give students an opportunity to practice and check their learning across the wider knowledge domain. Even though the assessed quiz does not cover all parts of the course content (as discussed above), students will have had exposure to most of it in practice quizzes. Therefore, when students get to the next assessment, they will not panic and, as a result, will not be tempted to engage in academic misconduct.

To conclude, practice (formative) quizzes help students prepare for the summative assessment (assessed quiz), develop good study habits, boost their confidence, and, as a result, reduce academic misconduct.

Next, mid-term & final exams.

What’s the problem?

Imagine an assessment task that consists of a mid-term and a final exam. These types of assessments typically consist of short-answer questions that require application of knowledge. Usually we would prepare students for these in tutorials.

However, academic integrity can still become an issue when it comes to answering exam questions, because students lack sufficient knowledge of, and scaffolding in, how to apply lecture theory.

Sometimes we assume that students will work through all the concepts covered in the lecture and the new knowledge will then be automatically consolidated and applied in the tutorial – just like the puzzle pieces build into a solid picture…

[Chart: lecture to tutorial]

… and that then this consolidated knowledge from lectures and tutorials is going to be applied in the mid-semester/final exam.

[Chart: tutorials to mid-semester test]

What often happens in reality, though, is this:

[Chart: unsuccessful lecture to tutorial]

When students arrive in the tutorial, they are not yet familiar with the new lecture material, as there is often not enough time to process, connect and retrieve it. Understanding a new concept is a cognitively challenging process, and to process the new knowledge the brain searches for models/examples it can draw from. If it doesn't find anything relevant, it can become cognitively overloaded. If by the end of the tutorial a student is not able to apply the knowledge we expect them to, this can lead to panic when they get to the assessment. As a result of that panic, they may end up engaging in academic misconduct.

How to avoid the pitfall?

To bridge the gap between theory and application, and to scaffold and prepare students for their mid-term/final assessment, worked examples could be used.

A worked example is an explicitly explained, exam-type short-answer question that follows the relevant topic input. Worked examples reduce learners' cognitive load, enabling them to focus on understanding the process that leads to an answer, rather than the answer itself. Ideally, each lecture should contain worked examples related to the topic, with each part of the example deliberately and explicitly explained to students.

It can also be a good idea to include worked examples and practice questions as pre-tutorial tasks, as they will help students secure the processes they need to tackle tutorial problems.

This way the learning process is chunked into separate stages, with each stage building naturally on the previous one so that each part is securely processed before the next is introduced. When worked examples are included in a course, the stages of knowledge acquisition look like this:

[Chart: lecture to mid-semester test]

Greater security in understanding and applying knowledge can increase students' confidence when dealing with exam-type questions. Consequently, it reduces the pressure of not being able to answer a question, leaving no reason to panic and no temptation to engage in academic misconduct.


Natalia Zarina, Learning Designer, and Paul Moss, Learning Design and Capability Manager, LEI

Tagged in Learning Enhancement & Innovation, learning design, Learning Systems & Innovation