Multiple Choice Exams: Thinking Handcuffs

 

June 2009
by Joe Bower

"Anyone can confirm how little the grading that results from examinations corresponds to the final useful work of people in real life." ~ Jean Piaget

It's final exam time at my school, and my teacher colleagues are collectively herding to the multiple-choice test scoring machine. For just under $800 CAD, our scoring machine can scan up to 35 sheets per minute, grade up to 100 questions per pass, and score exams with up to 200 questions, and it offers PC compatibility for advanced data collection and analysis. The front of the instruction manual proudly reads, "Grading your tests just got easier." After watching scoring sheet after scoring sheet scurry through the machine, I can personally attest to how easy this really is.

It's no secret why multiple choice exams are so popular among teachers: their utility is second to none. But what are the cons of multiple choice tests? Here are a few items to think about before giving your next one.

Ambiguity

Misinterpreting a question can result in an incorrect response, even if that response is valid for the student's reading of the question. A free response test allows the test taker to make an argument for their viewpoint and potentially receive credit.

Guessing

Depending on the number of possible answers provided, a test taker has a real chance of guessing the correct answer outright; with four options, for example, a blind guess is right one time in four. It is also conceivable for a student to select the wrong answer for the right reasons, or the right answer for the wrong reasons. The results of such a multiple-choice exam are surrounded with uncertainty and doubt.

No partial credit

Even if a student has some knowledge of a question, they receive no credit for that knowledge if they select the wrong answer. Free response questions may allow a test taker to demonstrate their understanding of the subject and receive partial credit.

Even carefully constructed exams that reflect very detailed curriculums can improperly assess students. If an exam was created to carefully reflect a certain curriculum, you might see only one question that covers a specific outcome. What if a student did in fact understand that outcome but, for any number of reasons, got that one question wrong? The test would then report that the student understood nothing of that concept, which would most likely be wholly misleading and untrue. How often can a teacher honestly report that a student understands nothing?

Overemphasis on timeliness

A premium is placed on speed at the cost of creativity and thoroughness. This overemphasis on timeliness also contributes greatly to the ambiguity of the exam. Most test takers are taught to madly fill in the remaining answers before having their exam taken away by the exam supervisors, and there is no way to differentiate between these random guesses and the responses that were carefully and thoughtfully selected.

Recognizing guessing as a problem, some test creators enact a penalty, such as deducting a mark for each incorrect answer, the hope being that test takers will not guess and will instead leave the question blank. This may stop the guessing, but it does not address the ambiguity: all those unanswered questions will simply show that the test taker got them wrong, when in truth the test taker may have had some level of understanding but receives no credit because they could not finish in time.

Subjectivity

How is the length of the exam decided? How many questions are necessary to show enough understanding? In the case of reading comprehension exams, how many reading selections will there be, and what is an appropriate length? How many answers will there be to select from? Which outcomes will be tested, which will be excluded, and which will be weighted more heavily? The point here is not to try to answer these questions, but to recognize that there is no single answer to any of them, and yet the choices made by the test maker can have an immeasurable effect on the test's results.

Behaviouristic in nature

These tests only care about whether the student got the right answer; they cannot measure whether the student has a true understanding of the content. Even in a subject such as math, which can be mislabelled as very black and white, right or wrong, it should very much matter how a student comes to answer the question 2 + 2 = 4. Did that student simply memorize his cue cards, or does he actually understand the addition process? A multiple choice test does not and cannot concern itself with such valuable information.

Poor testing can lead to poor teaching

Some teachers may use multiple choice exams voluntarily, while others may find their use compulsory. Either way, teachers may feel pressure to achieve high scores on these tests.

That kind of pressure can lead to poor teaching, such as lecturing on the part of the teacher and memorization on the part of the student. Take math, for example. Many teachers may teach tricks or shortcuts, such as: when dividing two fractions, simply flip the second fraction and multiply. A student could mindlessly comply and perform quite well by choosing the correct multiple choice answer. In cases like this, a poor assessment tool has led to a poor teaching technique, one that relies on mindless compliance and memorization rather than true understanding. Yet if we use the test scores as an indicator of learning, that teacher and student appear successful.

Interrater reliability

Multiple choice exams are created with one right answer in mind for each question. This straightforward scoring system is used so that any two raters will always agree on how well a student did. This need for agreement, known to statisticians as interrater reliability, is gained at an alarming price: authenticity is sacrificed for perceived reliability.

Testing test-taking skills

Multiple choice exams require a certain amount of test-taking skill, and some students have better test-taking skills than others. Many teachers will actually teach students strategies for writing multiple choice exams. For example, some test takers understand that an answer containing the words "always" or "never" is usually not the correct one, because rarely is something ever always or never. This is considered a fairly good strategy, and students who are aware of it may have a better chance of doing well. However, some test takers have come to believe in poor strategies. For example, some students believe the pattern of responses matters, and so they say to themselves, "This can't be another B answer, we have just had three in a row," or they believe in myths such as "when in doubt, pick C." Granted, we can all probably agree this is a silly strategy, but what if students actually use it? The format of the exam has skewed the measurement of that student's learning.

Averaging averages

Traditional practice encourages test raters not only to mark each question right or wrong, but also to tally up the number of correct responses and compare that to the total number of questions; of course, we know this as the average or mean. But what does this number actually tell us? Let's pretend there are three questions on the test for every outcome we taught. You could then look at the data and see how many of those three questions a specific student got right or wrong. Say that for the three questions testing one outcome a student got 1 out of 3 correct, but for another three questions testing a different outcome the student got 2 out of 3 correct. Taken separately, he understood 33% of the first outcome and 66% of the second. When you average these averages, however, he gets 3/6, which comes to a mark of 50%. What do these numbers mean anymore? Imagine how diluted the average becomes when you have 50 to 100 questions measuring any number of different outcomes, and yet the importance of these grades is elevated to grand heights. Note that the problem of averaging averages is not exclusive to multiple choice exams.
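
To see how the pooled mark hides the per-outcome picture, here is a minimal sketch in Python; the outcome labels and the 1-of-3 and 2-of-3 scores are the hypothetical numbers from the example above, not data from any real exam.

```python
# A minimal sketch of the "averaging averages" problem.
# Outcome names and scores are invented for illustration only.
results = {
    "outcome A": (1, 3),  # 1 of 3 questions correct -> about 33%
    "outcome B": (2, 3),  # 2 of 3 questions correct -> about 67%
}

# Per-outcome scores keep the diagnostic information separate.
for outcome, (correct, total) in results.items():
    print(f"{outcome}: {correct}/{total} = {correct / total:.0%}")

# A single overall mark pools everything together.
correct_sum = sum(c for c, _ in results.values())
total_sum = sum(t for _, t in results.values())
print(f"overall mark: {correct_sum}/{total_sum} = {correct_sum / total_sum:.0%}")
# Prints 50%, a number that no longer describes either outcome on its own.
```

Reported separately, the two outcomes tell very different stories; pooled into one mark, they collapse into a 50% that describes neither.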

Collaboration = cheating

Ask any parent for a list of characteristics they wish their children to develop as they grow into adults, and there is a very good chance that collaborative skills are somewhere on that list. When you think back to your schooling, how often were you permitted to collaborate with others during an examination? If you did try to collaborate, we all know what that was called: cheating, and you got in trouble for it. There may be some progressive classrooms out there, but unfortunately it would be a very safe bet that most classrooms still have students sitting and writing their exams in isolation. Regardless of your job or profession, how often are you told to figure something out in total and complete isolation: no books, no help, no talking? In the real world, there simply aren't that many times you are expected to solve a problem or perform a task entirely on your own, and even if you were, it would be awfully archaic to refuse you the opportunity to reach out for the help you needed to get the task done. Again, note that the lack of collaboration during exams is not exclusive to multiple choice tests.

Thinking cap

The very nature of multiple choice tests slaps students with a pair of thinking handcuffs. Who does the majority of the thinking on a multiple choice exam? Who asks all the questions? Who proposes all the answers? Thinking is messy and learning is messy, but multiple choice tests conveniently remove the mess: all students are required to do is circle or fill in a dot. If we were truly interested in measuring student learning, shouldn't we encourage students to show us as much of their thinking as possible? Reducing something as beautiful as learning to a digit or letter is an exercise in needless oversimplification.

Differentiated instruction and undifferentiated assessment

Many teachers today would readily admit that all learners learn differently and that it is the teacher's responsibility to address these different learning styles with differentiated instruction. However, many teachers still use multiple choice tests in an attempt to measure their students' learning. There is a real disconnect between our understanding of differentiated instruction and our attempts to measure learning with undifferentiated, standardized assessment tools.

Value what we measure vs. measure what we value

It is true that it makes good sense to occasionally stop and reflect upon how well we are doing something; the rest of the time, we should concern ourselves with the actual doing of whatever it is we have set out to do. A short anecdote may illuminate this point. A man was seen on his hands and knees searching underneath a street light. It was late at night and very dark. When a passerby inquired what the man was doing, the man said that he was looking for his lost keys. The passerby noted that the man was fortunate to have lost his keys under the street light. The man quickly replied that he had actually lost his keys some distance to the north, but it was too dark over there, so he wanted to search where it was easy to see.

There is a big difference between measuring what is simply easy to measure and measuring what we actually consider important. Multiple choice tests measure a very limited and narrow kind of learning. If a great amount of importance is placed on these kinds of tests, people will come to see these limited and narrow kinds of learning as most important, sacrificing their pursuit of other valuable kinds of learning that are rarely measured on multiple choice exams. Show me the multiple choice test that can assess things like sense of humor, morality, creativity, ingenuity, motivation, or empathy.

Despite all these reasons for abandoning multiple choice tests, their utility seems to trump these overwhelming cons. What is even more discouraging is that many teachers still choose to use multiple choice exams despite having a plethora of more authentic assessment alternatives, such as performance assessments, portfolios, written responses, and personal two-way communication. Teachers who continue to use multiple choice exams as their primary or default assessment tool are engaging in a kind of educational malpractice, because they are reporting on their students' learning in a way that may range from marginally inaccurate to wholly untruthful.
