Think of something new and innovative that you are trying out in your classroom, school, or district.
Prove to me that it works.
Yep, I want you to stop reading this and think about some fancy new way you have of educating and/or assessing students, then tell me what evidence you have that your new technique works.
Twice recently I’ve been faced with this demand. In the first instance, a teacher who was excited about portfolios after hearing my talk at NSTA15 in Chicago contacted me for help in convincing her science department to let her pilot them. She sent me a list of the department’s questions that looked something like this:
1) Have you seen an increase/decrease on standardized test scores?
2) Have you seen an increase/decrease in student motivation?
3) Have you seen an increase/decrease in student competency?
A similar question popped up in the application packet for the PAEMST (Presidential Awards for Excellence in Mathematics and Science Teaching):
Provide evidence of your teaching effectiveness as measured by student achievement on school, district or state assessments, or other external indicators of student learning or achievement.
Here’s the problem: portfolio-based assessments like those I employ are meant to be a replacement for standardized test scores. Portfolios are not just some labor-intensive test prep system. Judging them by test scores would be like spending months training for a triathlon only to find yourself riding a mechanical bull for ten minutes. You could probably ride the bull a little better than if you hadn’t trained, but the bulk of your training would be lost on anyone watching you ride it (badly).
What, then, do you say to the science department’s questionnaire about the effectiveness of portfolios? What proof could I possibly provide from external indicators of student learning that could match the depth and quality of the portfolio assessments themselves? ACT data might be the closest thing to useful testing data that I see, but comparing ACT achievement before and after portfolio implementation would be fraught with the usual data snarls that arise when comparing different test takers across multiple school years.
We are then at an impasse. Educators like me who want to use portfolios for assessment will tout all the amazing things you can observe in portfolios that you could not observe otherwise. Those who want to keep using standardized tests as the measuring stick for student and educator performance will decry the lack of a link between portfolios and achievement test scores.
I think that pretty soon two different systems will pop up across the country to accommodate these two assessment camps. One wing will be led by the testing juggernaut that stands to make a lot of money by continuing the current testing regime, but the other will be led by… Kentucky? New Hampshire? Your guess is as good as mine, but I suspect (hope?) that sooner or later we’ll see some states piloting portfolios (again) as much-needed replacements for the broken assessments we currently use.
In the meantime, I hope that teachers like the one I mentioned above are allowed, or even encouraged, to try new ways of teaching and learning, and that the burden of proof of effectiveness does not grind progress to a halt. New assessment systems require new systems of measurement. To expect more comprehensive forms of assessment such as portfolios to generate the same simple, supposedly comparable data that standardized tests have generated in the past is blatantly unfair to those willing to try something new.