Although I haven’t yet given it a catchy name like my previous Binary Grading system, my new standards-based system of assessment and reporting is working well. We are midway through the second quarter of school and I have enough experience with the system to step back and make a few observations about it. As with everything involving high school students, these observations could change tomorrow, but here’s what jumps out at me so far:
Volatility of grades: Students and I were surprised at some of the major grade swings that are possible in an SBAR system. I’ve had a few swing wildly between B’s and D’s and back, which usually doesn’t happen under a point-hoarding system in which assignments contribute to an average value that is hard to swing once enough points are built up. In my system, though, the nine major standards are reported independently of each other and all count so that poor performance in one can negate good performance in another. I like it, though, because it keeps kids on their toes. Some had begun to be complacent about their grades but a few forced reassessments woke them up to the reality that they may be called upon to continue to demonstrate mastery of each standard.
The role of the course content standard (Standard 1): When I was choosing my standards for this year’s pilot SBAR project, I chose to have 9 standards that were identical for each of my 4 preps because there are some skills that I wanted all my students to learn and demonstrate in every science class that they take. The only major difference between the biology, anatomy, chemistry, and AP Biology standards is in Standard 1, which is subdivided into specific topic areas unique to each course. The intent was to 1) make a system that didn’t drive my students and me bonkers with 4 separate sets of standards and 2) deemphasize the content-related grade in favor of the skill-related grade. It is working quite nicely, in my opinion. Skills like analyzing research articles, experimental design, and interpreting experimental data are much more important in determining the overall grade than whether a student knows the difference between osmosis and diffusion. I’m happy with that.
The role of the 8 skill-related standards: The skill-based standards were really written for me, and not the students. I recognized some deficiencies in my instruction and basically tried to force myself to make changes by creating a grading system that demands that I give students the opportunity to assess skills as well as content. So far I am doing okay with this, but I am still more content-driven than I would like. More student-designed labs are needed in most of my classes, for example.
Death of death by testing: My tests and quizzes can be tough, given the subject matter I teach, and I often see low percentage scores on some of the harder topics’ assessments. Regardless of whose fault it is, in a points system a low test score needs to be “fixed” by curving, throwing it out, or by some other fudging method so that the kid’s grade isn’t completely hosed. I used to curve or tweak point values so that some tests were not worth as many points, but that always bugged me, especially when I thought that I’d done a fine job teaching that particular topic. Now, though, my tests and quizzes are just additional pieces of evidence to add to the mix. I integrate percentage scores from content-specific tests into the 4-point scale in a way that rewards the high achievers but doesn’t completely destroy the low-scoring kids. It has worked well for me to have kids who score in the 90–100% range get 4’s, 80–90% get 3.5’s, 70–80% get 3’s, 60–70% get 2.5’s, 50–60% get 2’s, and below 50% get 1.5’s. I’m pretty satisfied with this part of the system as well, since the only students who are really nailed by tests are those who don’t show up to take them.
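For readers who want the conversion spelled out, here’s a minimal sketch of the percentage-to-scale mapping described above. The function name is mine, and since the published bands share their boundary values (e.g., 90% appears in two ranges), I’ve assumed each cutoff belongs to the higher band:

```python
def percent_to_scale(pct):
    """Convert a raw test percentage to the 4-point standards scale.

    Bands (boundary assigned to the higher band, my assumption):
    90-100 -> 4, 80-89 -> 3.5, 70-79 -> 3, 60-69 -> 2.5,
    50-59 -> 2, below 50 -> 1.5.
    """
    bands = [(90, 4.0), (80, 3.5), (70, 3.0), (60, 2.5), (50, 2.0)]
    for cutoff, score in bands:
        if pct >= cutoff:
            return score
    return 1.5  # floor: even a very low score stays above a 1

print(percent_to_scale(94))  # prints 4.0
print(percent_to_scale(47))  # prints 1.5
```

Note the floor of 1.5: a bombed test nudges the standard down rather than mathematically burying the student, which is the “doesn’t completely destroy the low-scoring kids” behavior described above.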
GoogleDocs rock the SBAR: All my record keeping is Google-ified. Student blogs are collected into my Reader, in which they are organized by class period. Evaluation of their blogs and other assessments is recorded in their own private Google spreadsheet with conditional formatting to show 4’s (blue), 3’s (green), 2’s (orange), and 1’s (red). Loving it! It’s truly the best part of the whole grade system switch. The spreadsheet is shared with the student (view only, of course) and with parents as needed. I leave comments along with each assessment so that students have some guidance should they choose to reassess a particular standard. Sure, it was a pain to set up over a hundred spreadsheets at the beginning of the year, but it’s paid off.
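The color scheme is just a threshold rule, which is easy to express in code if you ever want to replicate the gradesheet outside of a spreadsheet. This is a sketch of the idea only (the function and thresholds for half-scores like 3.5 are my assumptions, not a Google Sheets API):

```python
def score_color(score):
    """Map a 4-point score to its gradesheet highlight color.

    4's are blue, 3's green, 2's orange, 1's red; half-scores
    (e.g., 3.5) are assumed to round down into the lower band.
    """
    if score >= 4:
        return "blue"
    if score >= 3:
        return "green"
    if score >= 2:
        return "orange"
    return "red"

# One student's row of assessment scores for a standard:
row = [4, 3.5, 2, 1.5]
print([score_color(s) for s in row])  # prints ['blue', 'green', 'orange', 'red']
```

In the actual spreadsheet the same effect comes from four conditional-formatting rules, one per color band.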
The gradebook shows what they know: Yeah, that was kind of the whole point. But it works. Students and I can glance at their Standard 1 sheet and point out which content areas they struggle with. We can look at their main gradesheet and point out which skills they need to spend more time on. As a communication tool, the SBAR gradesheet is vastly superior to the school’s online gradebook. Even though I report similar numerical data in both places, the constraints of the online gradebook and the freedom of expression (color coding, written feedback, smileys ; ) in the Google gradesheet combine to make the gradesheet much more useful and fun to use.
Future tweaks: I need to look closely at how the system is being implemented in the 4 different preps and make sure I’m providing enough chances at assessments for the different standards. Looking at student gradesheets is really assessing myself in a lot of ways, because if there are major gaps in the standards that are being addressed, that’s really my problem, not theirs. Biology is working wonderfully, as is Anatomy, probably because I spend the most mental energy trying to reform those classes. Chemistry has a lot fewer assessments than I would like to see in the gradesheets and hasn’t met as many different standards as I think it should. Mostly we’ve hammered the lab principles and procedures standard (Standard 3) really well, since we do a lot of textbook labs in chem. AP Biology is another beast altogether, because in some ways I think that everything we do in that class is formative assessment and rarely finds its way into the gradesheet. My tendency with AP Bio is to use a lot of informal assessment (discussions with students), so we don’t stop and take quizzes and tests very often. This makes for a very empty gradesheet, and I’m not sure whether that is a bad thing or not. In a sense, the real summative assessment for that class doesn’t happen until May when the AP Exam rolls around. Also, having only 3 students in that class this year lends itself to a lot of one-on-one discussion, so this may not be the best year to judge the implementation of SBAR in that class.
What really hasn’t worked: I’m not happy with the way that the midterm exam results are reported to students. All my classes took midterm exams right after 1st quarter as summative assessments of their learning to that point. Their scores do not show up in their color-coded gradesheets, since they are not part of the standards-based grade, but instead only show up in the school’s online gradebook in the semester test slot. That’s the only way I found to report the grade, but I have a very strong sense that students don’t really understand the role of that midterm grade because it is buried in a part of their online gradebook that they don’t usually look at until after they’ve taken semester final exams. I’ve got a bad feeling that students won’t truly realize that the midterm has the weight it does (7.5% of their semester grade) until they see how it affects their final grade. If my experience so far holds true, you can tell students about your system all you want, but until they see how it affects their grade, they don’t really get it. I’m sure, however, that describing my SBAR system to students and parents will be so much better next year now that I’ve got some concrete examples of how it works to show students.