What if the next generation of teacher accountability systems simply relied upon assessment of student performances? You’re thinking: don’t we do that now? No, we don’t. In most cases, our current accountability systems rely on standardized tests that are supposed to measure student learning, which is not the same thing as assessing it. Attempting to measure learning often limits us to finding the best statistical models, crafting the best distractors, and determining cut-off scores. We should instead focus on figuring out what is happening in the classroom and how those learning activities engage students in performances of science and engineering. Isn’t that what taxpayers and parents really want to know: what’s going on in there?
I’m increasingly convinced that it is possible to assess and share a student’s performances of science and engineering without having to put a measurement (number/score/value) on that student’s work. It’s pretty simple, and even excellent educational practice, to tell a student how to fix their mistakes rather than simply writing “72%” at the top of their assignment. This kind of assessment without measurement should be happening routinely in classrooms. It’s also entirely possible to bring this assessment mindset to observing teachers for accountability purposes. Collections of student work, as in a portfolio, could be analyzed, and areas of strength and weakness identified and shared with the teacher and, perhaps, the public.
Four years ago I started using digital portfolios to assess student learning as a way to hold myself and my students accountable to a set of science performance standards that I knew my students were not achieving. It is not an amazing stretch of the imagination to picture a system in which such portfolios of student work are examined by the representatives of a state Department of Education to assess how I’m performing as a teacher. Unfortunately, the recent tragic history of accountability practices nationwide would suggest that, at least politically speaking, if an assessment system doesn’t generate numerical measurements of students, no one wants to touch it.
But why does the idea that we can measure student learning burn so brightly in many Departments of Education?
To answer that, I think we have to look closely at what these so-called measurements of learning (state achievement tests) get us: they provide numbers that stand in for unquantifiable qualities, namely “knowledge” and “ability.” Some of the resulting numbers are bigger than others, offering a sense of easy comparison between whatever the different numbers are attached to. Clearly, if I am buying a car, one that gets 40 mpg is superior to one that only gets 26 mpg. But is it fair or even appropriate to attach such numbers to students, teachers, schools, school districts, or even states? What does a number attached to a student even mean? Does a scale score of 400 on a state test mean that a student has learned less than one who earns 500?
Worse yet, what are those measurement comparisons used for? Let’s examine my least-favorite use of educational measurement data: the real estate market. We all know the real estate mantra: location, location, location. When you look for a new house these days, you can quite easily access information about the quality of the neighborhood in which the house is located. Of course, school ratings are often thrust at potential buyers as a major indicator of the “right” neighborhood. Some of the newer realtor-oriented mobile apps sport “schools” tabs that are clearly meant to add helpful data to your house-buying experience.
For science, let’s pretend to buy a home here in my town, La Junta, Colorado. In our case the community is composed of one neighborhood so all our school district data applies to the whole town. Here’s what we find out about my school district on some websites that you can easily find on your own (comments mine, but from a prospective buyer’s perspective):
Overall rating: 3 out of 10. Ouch. Better not buy a house here. These schools must suck. Wait a minute… this school district was rated a 3 out of 10, yet these ACT test scores are right near the state average, so shouldn’t the district rating be near a 5 out of 10? Maybe there’s more to it.
Hmmm, on second thought, maybe I don’t want to move here after all. Maybe this educational environment deserves a 3 out of 10 if these are the kind of people my kid would go to school with. Why else would a realtor show me these numbers?
In reality, a combination of “educational environment” (whatever that means) and state testing scores (CSAP/TCAP) is what brings our magic number down to 3/10. Sure, the realtor sites add the caveat that we should check with the individual school districts to look at multiple measures of success, but as a simple first look, a single measurement is sure easier to produce. And it’s misleading, wrong, and easily manipulated.
And that’s just how numbers are used in the real estate business. The business of education sometimes uses those numbers in far more harmful ways. Look at any recent headline with the words “standardized test” and you’ll probably see some of the fallout from decades of so-called measurement of learning.
I don’t have the magic bullet to fix the national obsession with comparing apples and oranges, but if I did, it would look a lot like a portfolio-based collection of student work that could demonstrate not only students’ effort and learning but also the care and planning that teachers invested to help create an environment in which their students can thrive. That’s the kind of accountability system that I can get behind.