Close-up of a student's hand holding a crib sheet, and a test.

Photo by Hariadhi.

The revelation on Friday that Pearson is monitoring social media for so-called security breaches of its PARCC tests created a firestorm on social media over the weekend.

Some people are freaked out by the creepiness of the idea that Pearson is watching social media. That, to me, is not the most galling aspect of the story. It’s a good reminder that social media is public. You shouldn’t say something “inappropriate” on social media unless you expect it to become public. [Claims of privacy violations are serious but seemingly premature: we don’t know the student’s age, a child under the age of 13 would be in violation of Twitter’s ToS anyway, and there’s no mention of whether the child’s account was protected or public.]

What is particularly galling, and what we should be infuriated about, is the fact that we have invested so much time, energy, and money into a test that comes with a gag order attached. It is galling that it is “inappropriate” to talk about the test – whether you’re a student or a teacher. We should instead focus our time, energy, and money on devising assessments that aren’t so fragile. For more on that topic, I would suggest you read the Jersey Jazzman’s recent post, “When Pearson Monitors Students, They Prove the Inferiority of Their Product.”

It was in this context this morning that I read through part of the new issue of AERA’s Educational Researcher. The issue focuses on value-added measurement (VAM), and one article, “Using Student Test Scores to Measure Teacher Performance: Some Problems in the Design and Implementation of Evaluation Systems,” touches on this issue of “cheating.”

One of the four “problems” discussed with regard to VAM is titled “Teachers Monitoring Their Own Students During the Exam.” Essentially, the authors suggest (and substantiate with some data) that when you attach high stakes to an exam and ask teachers to proctor their own students, there is an incentive to help those students perform better. That incentive can, at times, cross the line into “cheating.”

That’s reasonable. There is evidence of real cheating by teachers, and I would contend that a simple solution is to remove the high stakes from these tests. That would eliminate the perverse incentives that drive these actions.

But in describing the examples of cheating, the authors state the following:

Coaching can take such subtle forms that students, and perhaps even teachers, are not aware that they have overstepped a line. Teachers circulating throughout the room can coach students without saying a word; they have only to read answer sheets and point to questions that students have missed, a practice particularly likely to be effective if the student knows the right answer and has missed the question due to a careless error.

Wait. Read that again. That’s cheating?!?

And this highlights the absurdity of standardized tests and their use in measuring “student achievement.” If we’re interested in measuring what students know and what they can do, why are we concerned with their proclivity for accidentally skipping questions? If you want to get a valid snapshot of what students can do, you would want teachers to encourage students to answer every question. After all, getting a question wrong because you accidentally skipped it (perhaps you intended to go back and do it later if you had time) is very different from getting it wrong because you have no idea what to do (and likewise very different from getting it wrong because you made a minor error in calculation or misread the question).

Circling back to PARCC, this problem is exacerbated by the new, computer-based format of the exam. The instructions for test administrators are clear – we are not allowed to help students with problems they have navigating the test interface.

And trust me, students have problems. I’ve seen the problems with adults at “Take the PARCC” events involving the practice exams, and I’ve seen the problems with students while administering the actual exam. I won’t provide details – lest “Big Brother” Pearson complain that I’ve violated test security – but I don’t think it takes any stretch of the imagination to picture them.

At the end of the day, that’s just one more example of why these tests are of questionable validity. There are very real obstacles that can and will affect student performance and that have nothing to do with their mastery of the Common Core State Standards. Simply put, the test is not accurately measuring what it purports to measure… and it is therefore ridiculous to use these assessments to make high-stakes decisions about students, teachers, or schools.