The support tabs below will guide you through how to create and manage a quiz.
Quizzes can be a useful tool to quickly gauge a cohort's knowledge of a given subject, whether you use them diagnostically (before any teaching has taken place) or during or after the subject has been taught.
Summative and TCAs
Quizzes are a very useful tool if you are looking to create a TCA in place of an exam. You can create a 24-hour window within which the 4-hour TCA can be completed at any time. If you are creating a quiz for a summative component of your module, or would like support in doing so, please contact your course administrator (if you have one). They will ensure that the settings for grade items and special access are correctly weighted and configured.
Below you will find information on: creating a quiz, setting the availability, choosing the right assessment settings, and deciding what a student sees when they submit.
This tutorial assumes that you have already set up your questions and the actual content of the quiz – see this tutorial if you have not.
There is also a separate tutorial for how you set the availability and special access for the quiz.
Next, you will probably need to set up what the student sees when they submit their quiz. Move to the tutorial for this to continue.
For information on setting up automatic grading and the number of attempts allowed, see this tutorial.
Once you’re happy with all of those, look at these steps to determine what a student sees when they submit a quiz.
If you have worked through all of the guides mentioned at the start of this set of instructions, then you are probably ready to publish the quiz. If that’s the case, save and exit and all of the information you have input will take effect. If you set an active date, then the quiz will automatically publish on that date.
Once you're ready to start building your quiz, you will need to think about which question types you need. Below you'll find information about your options. The questions are listed in a slightly different order on Brightspace, but below I have grouped them based on how they behave and the implications for how easily they can be graded automatically.
Brightspace can grade the following question types automatically and, provided the answers were set up correctly, the student will receive the mark for a correct answer. Click the name of the question to learn more about it.
This question type is very straightforward. You can still add a hint, though this will only display if hints have been allowed for the quiz.
If you wanted a question with answers along the lines of ‘True / False / Impossible to tell’, you would need to use a multiple choice question instead.
This is for when you have several choices, but only one is correct. Handily, you can randomise the answers for each student. This makes it harder for people to share answers if you've used enumeration (e.g. "a/b/c"), since the same letter will point to different answers for different students. It also makes it slightly harder for students to breeze through if they redo the quiz.
If you want students to be able to select more than one correct answer, you will need the ‘multi select’ question type.
This is where it begins to get a bit more complicated. At least one of the options must be correct for this question type to grade automatically, but any number of the options can be correct. For example, students may need to select the three correct answers from five possible options in order to get the mark.
Again, the answers can be randomised.
The grading of the question contains a bit more nuance because you can choose what happens if they get the question partly right.
You have three options:
As ever, you can give feedback for each option selected, and/or overall feedback for the question.
This type of question is excellent for things like matching terms with their definitions. It's worth noting that you can set it up so that more than one choice results in the same match.
That’s to say, it does not have to be a simple 1:1 correspondence between a ‘choice’ and its match. It could be used as a categorising question, whereby 10 different terms have to be put into one of three categories.
The choices will appear on the right-hand side, in order. The matches will appear to the left, shuffled, with a dropdown selection next to each match so that you can choose which of the choices it corresponds with.
Once again, you have options for how it should be graded. Your three options are:
How do these options work out in practice? Let’s assume that there is a question that asks students to match eight capital cities to their countries. We have set the question to be worth eight points, and they are correct with six of their matches. What would they score under each of the grading options?
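The capital-cities example above can be sketched in code. This is an illustrative sketch only, not Brightspace's own grading code, and it assumes the three schemes are an equal share per correct match, all or nothing, and right minus wrong; check the option names in your own quiz settings.

```python
# Worked example: 8 capital-city matches, question worth 8 points,
# student gets 6 matches right. Each function is one grading scheme.

def equally_weighted(points, total, correct):
    # Each correct match earns an equal share of the points.
    return points * correct / total

def all_or_nothing(points, total, correct):
    # Full points only if every single match is correct.
    return points if correct == total else 0

def right_minus_wrong(points, total, correct):
    # Wrong matches cancel out right ones; the score never drops below zero.
    wrong = total - correct
    return max(points * (correct - wrong) / total, 0)

points, total, correct = 8, 8, 6
print(equally_weighted(points, total, correct))   # 6.0
print(all_or_nothing(points, total, correct))     # 0
print(right_minus_wrong(points, total, correct))  # 4.0
```

So the same six correct matches could be worth 6, 0 or 4 points depending on the scheme you choose, which is why it is worth deciding deliberately rather than accepting the default.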
An ordering question allows you to put in a number of words, phrases or paragraphs which the student must then place in the correct order. The options will appear randomised when a student takes the test.
Ordering questions can have a hint, as well as feedback for each of the items that needed to be ordered. As normal, you can also give overall question feedback.
The scoring options are the same as with matching questions:
It’s worth bearing in mind that with an ordering question, getting one item in the wrong place makes it likely that multiple others are in the wrong place. You will want to consider this in the points that you attribute to the question, the difficulty level (if you’re using these) and the scoring method.
The following question types can be graded automatically, but only reliably if you make allowances for possible errors that students might make, such as typos.
For example, if the correct answer is “stalactites”, but the student writes “stalagtites”, they will not get the mark unless you’ve told Brightspace also to accept “stalagtites”. You can pre-empt errors that you want to allow by inputting as many possible correct answers as you want, but unless the student types one of those exactly, they won’t get the mark.
You can tell it not to be case-sensitive, though.
Again, click the name of the question to see an explanation.
Short answer questions work for questions where a student will need to type a very brief (generally no more than two words) answer. Whether they get the points or not depends on whether you’ve anticipated what their correct answer might look like. For instance, if the answer is ‘cheddar’, a student would not automatically get the point if they wrote ‘cheddar cheese’. In an example like this, you’d want both ‘cheddar’ and ‘cheddar cheese’ to be considered correct. You’d also need to consider whether spelling mistakes, like ‘chedder’, should be accepted.
You can decide whether their answer needs to be case-sensitive or not. The answer can also be a regular expression, if needed.
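A regular expression lets one answer rule cover several acceptable variants at once. The pattern below is an illustrative sketch using the cheddar example, not a pattern taken from Brightspace itself; it accepts 'cheddar', 'cheddar cheese' and the misspelling 'chedder', case-insensitively.

```python
import re

# One pattern covering several acceptable variants of the same answer:
# 'chedd[ae]r' allows both spellings, '(\s+cheese)?' makes 'cheese' optional,
# and re.IGNORECASE removes case-sensitivity.
pattern = re.compile(r"chedd[ae]r(\s+cheese)?", re.IGNORECASE)

for answer in ["cheddar", "Cheddar cheese", "chedder", "brie"]:
    accepted = pattern.fullmatch(answer.strip()) is not None
    print(f"{answer!r}: {'accepted' if accepted else 'rejected'}")
```

If you are not comfortable with regular expressions, listing each accepted answer individually achieves the same result; the regex simply saves you from enumerating every combination by hand.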
Because of this, think carefully about whether you really want a short answer question, or whether you’d rather students chose the correct answer from suggestions using a multiple choice question.
The more possible answers a student could give, the less advisable it is to use a short answer question. You’d have to anticipate a much wider range of potentially correct answers.
If you do opt for short answer questions, you should check the student responses once they’ve completed the quizzes. If any responses were marked incorrect that you feel should have been accepted, add them to the accepted answers.
You are able to subsequently have any attempts regraded to reflect this.
If you have chosen more than one blank, you will have an option to decide whether students receive a share of the points for each correct answer, or whether it’s all or nothing. If you’re doing this, you might want to consider the ‘multi short answer’ question, which is designed for exactly this.
This works very similarly to a short answer question, but you can specify how many answers the student should give. As mentioned before, you will need to anticipate all responses that should be interpreted as correct for each answer.
Imagine, for example, the question “What are the toppings on a Hawaiian pizza?” Broadly, the answers would be ‘tomato’, ‘ham’, ‘cheese’ and ‘pineapple’. However, you’d need to allow for students answering with things like ‘tomatoes’ and ‘mozzarella’. You may also need to account for typos such as ‘mozarella’ or ‘pine apple’.
This is why it’s important to consider which question type will work best in practice, and to review the answers (at least to begin with) so that you can be sure that the auto-grading is behaving as you need it to.
This type of question works identically to ‘short answer’ and ‘multi short answer’ from a technical point of view. The only difference is that you can create blanks which appear within a sentence or paragraph that you type.
The following question type cannot be graded automatically. You’ll have to mark this one yourself, so students won’t get their full quiz mark until you’ve graded this question and published the mark.
Click on the name of the question for more information.
A written response question is the only question which cannot, under any circumstances, be graded automatically. This type of question is intended for longer responses which a lecturer or examiner would have to read before allocating marks.
If a written response question is used for a summative exam – i.e. a ‘grade item’ exists for it in Brightspace and it pulls through to the gradebook – you would need to mark this question before releasing the full grade and feedback for the quiz.
It is possible, if you want, for students to see the results for the questions on which they were automatically graded before you mark the written response questions.
You have a few useful options when creating this kind of question. As usual, you can create a hint. You can also input some initial text that the students will see as a sentence starter. You can create an answer key visible to the marker, which will help them evaluate the response.
If you enable the HTML editor, students can respond with richer media, such as audio or video.
Finally, the ‘Custom response box size’ gives the student a suggestion of the length of response that you’re looking for (though you will want to specify this in your question).
The final two question types relate to more advanced options for mathematical questions. You can get a degree of random question generation based on a formula, and set acceptable margins of error. Click on the name of the question for more information:
This type of question is aimed at assessing mathematical knowledge, but allows you to generate questions based on a formula so that the numbers differ each time someone sits the test.
D2L explain this in more detail here.
On a basic level, let’s assume that we want to test addition up to 10.
The question is written as: What is {x} + {y}? – This means that, when Brightspace runs the quiz, it will generate values for x and y based on the parameters that we set below.
We input the formula as {x} + {y} – This means that Brightspace will get the correct answer by adding x and y, and will grade the student’s answer accordingly.
Under variables, we set ‘x’ to have a minimum value of 0 and a maximum value of 5, with 0 decimal places (i.e. it will be a whole number).
We also set ‘y’ with a minimum of 0 and a maximum of 5, also with 0 decimal places.
This means that, when Brightspace runs the question in the quiz, the student will be assessed on anything from 0+0, all the way up to 5+5, with all possible variations in between, such as 1 + 4, 2 + 5 or 4 + 3.
That is the simplest possible example, but there are many more options. The formula can encompass all of the mathematical operations, as well as powers, square roots, sines, cosines and more.
You can also allow a margin of error, either in absolute terms or as a percentage.
It’s also possible to input the units that the student should be using, e.g. ‘litres’ or ‘l’. If you do this, you can then specify what percentage of the mark should be lost if the number is correct but the units are wrong.
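To see how a margin of error and a units penalty might combine, here is a sketch of the grading logic. The numbers, function name and penalty rule are illustrative assumptions, not Brightspace's internal grading code.

```python
# Grading sketch: a 2-point question accepting answers within a 5% margin
# of error, with 50% of the mark lost if the units are wrong.

def grade(student_value, student_units, correct_value, accepted_units,
          points=2.0, tolerance_pct=5.0, units_penalty_pct=50.0):
    # Accept the number if it falls within the percentage tolerance.
    margin = abs(correct_value) * tolerance_pct / 100
    if abs(student_value - correct_value) > margin:
        return 0.0
    score = points
    # Deduct the configured share of the mark for wrong units.
    if student_units.strip().lower() not in {u.lower() for u in accepted_units}:
        score -= points * units_penalty_pct / 100
    return score

print(grade(2.45, "litres", 2.5, ["litres", "l"]))  # 2.0: within 5%, right units
print(grade(2.45, "ml", 2.5, ["litres", "l"]))      # 1.0: right number, wrong units
print(grade(3.1, "litres", 2.5, ["litres", "l"]))   # 0.0: outside the tolerance
```

Note that accepting both 'litres' and 'l' here mirrors the advice above about anticipating the different ways students might express the same answer.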
If you are expecting students to show their working out, it is better to provide a written response question. You could even do this immediately below the arithmetic question to combine the two.
The Significant Figures question type works along similar lines to the arithmetic question type, but is geared towards assessing whether students have used the right number of significant figures and have calculated and rounded correctly.