Today was the beginning of MAP testing, which is not really testing, but since the MAP (Measures of Academic Progress®) is a replacement for the loathed WASL, old names die hard.
So picture two dozen kindergarteners and 1st graders seated at desks with laptops, mice, and headphones. Some of them had never sat in front of a laptop before. Imagine administering a self-guided assessment to students who had never seen a pencil or paper. There were some teething troubles, as not all the stations were operational, missing a headphone here or a mouse there, but those were easily fixed.

Not so easily fixed was the design of the exercises. Given the headphones, you can guess that questions and answers were read aloud to each student. Fine, though the language and terminology used by the assessment software is likely to differ from that used by the classroom teacher. Unlike a classroom teacher, a computerized test cannot alter the delivery of questions or answers to match the listener. So the assessment of the material may be skewed by how well the student comprehends the language of the voice prompts and the written answer choices.
The MAP is intended to dynamically change the questions based on prior answers. So if a student shows strength in a given area, the questions become increasingly challenging, to probe the extent of that strength. Great idea, really. However, if the wording of the questions themselves is more challenging than the questions' content, how accurate can we expect the results to be?
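For the curious, here's roughly what that adaptive loop looks like, in a toy Python sketch. This is purely illustrative: the function names, the fixed simulated ability, and the simple staircase logic are my own inventions, not NWEA's actual algorithm, which as I understand it is built on item-response theory.

```python
import random

def ask(difficulty):
    """Stand-in for presenting one question. The simulated student
    answers correctly more often when the item sits below their
    (hypothetical, hard-coded) ability level."""
    student_ability = 5.0
    p_correct = 1 / (1 + 2 ** (difficulty - student_ability))
    return random.random() < p_correct

def adaptive_test(num_items=10, start=5.0, step=1.0):
    """Toy staircase: correct answers push difficulty up, misses pull
    it down, and the step shrinks so the estimate settles."""
    difficulty = start
    for _ in range(num_items):
        difficulty += step if ask(difficulty) else -step
        step = max(step * 0.8, 0.25)
    return difficulty  # final difficulty doubles as an ability estimate

if __name__ == "__main__":
    print(f"Estimated ability: {adaptive_test():.2f}")
```

Notice the built-in assumption: every response is treated as a clean signal of ability. If a six-year-old misses an item because the voice prompt confused them, the staircase moves down all the same.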
Given the age of these students, some questions contained manipulable elements: put three apples on the plate, or sort the ducks by length, for example. In the apple example, simply clicking on an apple made it appear on the plate. With the ducks, each one had to be picked up and dragged into position with the mouse. Did I mention that not all of these children were experienced laptop pilots? In other questions, an equation had a missing term and the student was to choose it from a list. Rather than simply clicking on the desired element, they were expected to drag it into position. Some found the trackpad easier, or perhaps more fun, to use, but the proctor/administrator wouldn't let them: why not?
Most frustrating, and possibly what makes the MAP design most suspect, is that there was no way for a student to indicate they didn't know or were unfamiliar with a concept. They had to choose an answer to get to the next question. The questions students asked me indicated that some of them understood the concept being assessed but had no way to choose from the bewildering answers. Again, how accurate can the results be?
I suppose some will argue that the fix for inadequate technology is more technology: perhaps a touchscreen device like the iPad would work better. Color me skeptical. Teaching and assessing progress is a human activity that might be aided by computational power, but I favor a high-touch over a high-tech approach. When adaptive technology can find ways to assess students with something other than a one-size-fits-all approach, we might be getting somewhere.