
Course Design Blueprint

Transparent assessment processes

It is important that students understand how their assessment is carried out: how and when they will learn what they need to do, how they submit work, how their work is marked, how they will receive feedback, and how that feedback is intended to support their further development and achievement. The University's policies and procedures set out in detail how it expects assessment to be carried out, ensuring a consistent process is applied across all provision.

To ensure transparency and fairness of assessment designs and communication, course teams are expected to:

  • Explain to students how their work will be marked, moderated, and subject to external examiner review.
  • Make clear how Turnitin is employed in the assessment process, and allow students to use the tool to enhance their understanding and their practice.
  • Ensure all assessment criteria are clearly explained to students, with their application illustrated to enable students to self-assess their work before submission.
  • Provide guidance on how word limits for assessments will be applied (see course team guidance box below).
  • Provide clear assessment schedules at the start of each academic year indicating all assessment deadlines.

However, course teams will determine many aspects of how their course's assessments will be delivered to students:

The University has adopted a generic set of marking criteria, available in the University's Course Handbook template (on the course approval pages in the University's Quality Manual), to ensure that comparable standards are applied across all courses. However, there is a degree of flexibility in how individual courses employ these criteria:
  • Some courses use the generic criteria directly, employing them explicitly in the feedback that students receive for each assessment component.
  • Other courses have adapted the criteria to better suit their particular subject areas and the nature of their students' assessments, and use these adapted criteria within student feedback to explain marking decisions.
  • A few courses create separate sets of criteria for each module or component, each aligned with the generic criteria, and provide these in assessment briefs and use them within feedback.
The University's Assessment of Group Work Policy provides a framework for group assessment, allowing course and module teams a degree of freedom in how they implement group-based assessments. Teams will need to ensure their approaches are made clear to students from the outset, particularly how any final marks will be apportioned.
In some assessments students may be permitted or required to select case studies or subjects for their work, agreeing these with their tutors. In such situations, the mechanism for agreement should be made explicit in the assessment brief, as should any criteria the tutor will use to confirm the suitability of a student's proposal. Similarly, some assessments may allow students to negotiate the format of their submission, and here too the mechanisms and criteria employed should be made clear from the outset. Teams adopting such flexibility should, where possible, apply a consistent approach to approving students' proposals across their provision.
Setting a word count or time limit for an assessment component implies that there will be a penalty for not adhering to it. Penalties for work which exceeds or falls short of set limits should be clearly communicated to students in course documentation and in all assessment briefs where such penalties apply. University of Suffolk course teams have adopted a variety of approaches to setting penalties, as deemed appropriate to the subject area(s) involved. Some examples include:
  • allowing a 10% variance in submission lengths, and referring any work that falls outside this tolerance.
  • allowing a particular variance, then applying a scaled mark penalty according to how far beyond the limit the work is (e.g. for work exceeding the word limit by more than 10%, deducting marks in proportion to the percentage by which the work exceeds the limit).
  • adopting a standard approach such as one of the above, but being stricter for specific assignments where working to a particular length is itself related to an assessed learning outcome. For example, where a student is required to produce an article for a publication with a strict word count, adhering to that word count could be treated as a requirement of the assessment.
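As a purely illustrative sketch of the second approach above, a scaled penalty can be expressed as a simple calculation. The 10% tolerance and the proportional deduction here are assumptions for the over-length case only, not University policy:

```python
def word_count_penalty(word_count: int, limit: int, tolerance: float = 0.10) -> float:
    """Return the percentage of marks to deduct for an over-length submission.

    Hypothetical illustration: within the tolerance band no penalty applies;
    beyond it, marks are deducted in proportion to the percentage by which
    the submission exceeds the word limit.
    """
    excess = (word_count - limit) / limit  # fractional overrun (negative if under)
    if excess <= tolerance:
        return 0.0
    return round(excess * 100, 1)  # penalty equals the percentage over the limit
```

So a 2,300-word submission against a 2,000-word limit (15% over) would attract a 15% deduction under this scheme, while anything up to 2,200 words would attract none.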
Most courses culminate in a capstone module, often in the form of a project or a dissertation. Usually allocated a higher credit weighting than other modules, it is a key focus for students and can be a significant determinant of a student's final classification. Consequently, it is important that the assessment processes for this module are robust and clear for all involved. Supervisors develop through the supervision process both a deeper relationship with the student and a clear understanding of the student's achievements and, in some cases, failures, which can make it difficult for them to take an unbiased view of the final submission. To avoid such unintentional bias, it is prudent not to use the supervisor as the first marker for the student's submission, although they may usefully act as moderator for the work.

What is a rubric?

Most educators would agree on the idea of a rubric being something along the lines of "a type of matrix that provides scaled levels of achievement or understanding for a set of criteria or dimensions of quality for a given type of performance." (Allen & Tanner, 2006)

 

How do rubrics benefit academic staff?

There are several benefits of using rubrics for instructors. The overall aim is that they make assessment quicker, more accurate, and more transparent, while promoting better progress for the learner.
  • You can provide much more detailed feedback in less time than by using annotations and text comments alone
  • Students who are more aware of the assessment criteria upfront should achieve better grades
  • You can analyse patterns of success or difficulty against specific criteria
  • You can more easily moderate and standardise feedback from different instructors
In addition, using rubrics in Brightspace has particular benefits:
  • The ability to re-use rubrics that have been created previously
  • The ability to share rubrics across a whole faculty, school, or even the university
  • The ease of sharing the rubrics with students before they submit
  • The speed with which you can highlight which level of each criterion applies to a piece of work
  • The ability to tailor the rubric to meet the exact scoring requirements of your assessment

How do rubrics benefit the student?

From the perspective of the student, rubrics have a number of advantages. Rubrics can:
  • Make expectations clear
  • Provide transparency in the assessment process
  • Help with self-assessment
  • Help with peer-assessment
  • Be used to structure feedback

Rubrics in Brightspace

  • Holistic Rubrics - Single-criterion (one-dimensional) rubrics used to assess participants' overall achievement on an activity or item against predefined achievement levels. Holistic rubrics may use a percentage or text-only scoring method.
  • Analytic Rubrics - Two-dimensional rubrics with levels of achievement as columns and assessment criteria as rows, allowing you to assess participants' achievement against multiple criteria in a single rubric. You can assign different weights (values) to different criteria and include an overall achievement by totalling the criteria. Analytic rubrics may use a points, custom points, or text-only scoring method. Points and custom points rubrics use both text and points to assess performance; with custom points, each criterion may be worth a different number of points. For both, an Overall Score is provided based on the total points achieved, and this determines whether learners meet the criteria set by the instructor. You can manually override both the Total and the Overall Score of the rubric.
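As a purely illustrative sketch of how a custom-points analytic rubric produces its Overall Score, each criterion (row) carries its own maximum points, and the Overall Score is the total of the points awarded at the level selected in each row. The criterion names and point values below are invented, not taken from the University's template:

```python
# Hypothetical custom-points analytic rubric: three criteria (rows),
# each with its own maximum, and the marks awarded at the selected level.
rubric = {
    "Argument": {"max": 40, "awarded": 30},
    "Use of evidence": {"max": 40, "awarded": 28},
    "Presentation": {"max": 20, "awarded": 15},
}

# The Overall Score is simply the sum of awarded points across all rows.
overall_score = sum(row["awarded"] for row in rubric.values())
maximum = sum(row["max"] for row in rubric.values())
print(f"Overall Score: {overall_score}/{maximum}")  # prints "Overall Score: 73/100"
```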
Guidance on building rubrics in Brightspace is available to UOS staff.