
Blogging in the Science Classroom: The Worksheet is Dead

One of the major changes that I made this year was to switch to using individual student blogs as the centerpiece of student assessment (the other major change was to implement standards-based grading). I started using student blogs for a number of reasons including:

  1. I was tired of grading worksheets with the same copied answers on them.
  2. I realized that these worksheets weren’t always helpful in learning content, and in fact, much of the time they got in the way of learning.
  3. Students in my classes have access to a MacBook cart whenever they are in my classroom, and we have fantastically dependable wireless internet connectivity for these laptops (yay tech support!).
  4. Blogging platforms like Blogger and WordPress are free.
  5. I’m increasingly wary of multiple choice anything as real assessment and wanted students to write more.
  6. I wanted students to have a permanent, online record of their achievement throughout the year, not some pile of papers shoved in a binder (or trash can).
  7. I wanted students to have an audience for their work that would include each other, their families, the community, and the world.

With all these highfalutin ideals in mind, we launched our blogs at the beginning of this school year, with some fear and trembling. Very few students had done any blogging before, although a couple had existing blogs from their English classes. The first challenge was to get everyone signed on to one of the blogging services. Most students chose Blogger, probably because we thought it would be easier initially since we all had Google accounts. The only problem was that, at least at the time, Google Apps accounts like my students had did not work very well with Blogger. Students ended up having to create their own personal Google accounts just so they could use Blogger. This wasn't a big deal, just not as smooth as it would have been if Blogger were integrated into Google Apps.

So how did we use the blogs? They became the go-to location to post assignments for me to read and grade. For a week or two, though, I operated a lot like I did last year, posting assignments on Edmodo and using its great assignment features to have students turn things in online, as well as posting them to their blogs. I realized that this was a duplication of effort and soon instead of sending out “assignments” in Edmodo, I just sent files and links as “notes.” This meant that these resources no longer came with a due date and that I was not using Edmodo to see who turned in which assignments.

Instead, I figured out how to use Google Reader to monitor my students' blogs. After subscribing to each student's RSS or Atom feed, I organized all of their feeds into folders in Google Reader:

Reader allowed me to keep track of when students published new posts and to quickly find a particular student’s blog if we wanted to discuss something that they had posted. We still used Edmodo extensively for communication, just not for assessment. For example, if students made changes to their blogs, the changes would not always be highlighted in Reader so I asked students to message me on Edmodo if they made changes to a blog post that I had commented on already.
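Google Reader did the feed-watching for me, but the underlying idea is simple enough to sketch in code. Here is a minimal Python version that parses an Atom feed and lists posts newer than a given date; the feed XML, student name, and post titles are all invented for illustration, and a real version would fetch each blog's feed URL with urllib instead of using an inline string.

```python
# Sketch of feed monitoring: find entries in an Atom feed newer than a cutoff.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace, needed for findall/findtext

def latest_posts(feed_xml, since):
    """Return (title, updated) pairs for entries updated after `since` (ISO 8601 string)."""
    root = ET.fromstring(feed_xml)
    posts = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        updated = entry.findtext(ATOM + "updated")
        # ISO 8601 timestamps sort correctly as plain strings
        if updated and updated > since:
            posts.append((title, updated))
    return posts

# Inline stand-in for a fetched feed; contents are made up.
sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Student A's Biology Blog</title>
  <entry><title>Osmosis lab</title><updated>2011-03-02T10:00:00Z</updated></entry>
  <entry><title>Cell respiration Prezi</title><updated>2011-02-10T09:00:00Z</updated></entry>
</feed>"""

print(latest_posts(sample, "2011-02-15"))  # only the osmosis post is new
```

A folder per class in Reader is just this loop run over a list of feed URLs, grouped by course.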

Speaking of comments, I did not personally comment directly on each student blog post. I figured that other readers of their blogs could do that. Instead, I gave feedback about each post as part of the student’s gradesheet entry. Some comments were pretty general (nice job! or something similarly lame) but I got better (I think) at commenting and left specific advice for ways to change the posts to better meet the standards.

One criticism that I’ve heard about my grading system is that it doesn’t spell out for students exactly what they need to do to meet a standard. I think that would be a concern, except for the fact that I tried to provide constructive comments on most everything students did and I let them respond to the comments by fixing their posts for a higher grade. Students did have to make the first effort at a blog post to try to show what they have learned about a particular topic or skill. I worked with them from there to improve their understanding by providing comments and discussing their posts with them. I had a number of students say that this was their favorite part of my class this year: the fact that they could try out a post, get some feedback, and go back and fix it as needed.

What did students blog about? Everything, really. Most of it was even related to the class ; )   As students and I discussed topics or performed labs in class, those topics and labs found their way in some form into students' blogs. Some posts were simple text-based blog posts, but at other times students used a variety of web 2.0 tools to put "learning artifacts" on their blogs. These learning artifacts included the use of Prezi, Glogster, Quizlet, Google Docs, Photobucket, DomoNation, Xtranormal, bubbl.us, and other tools.

If you’ve viewed the example posts linked above, you may have noticed that different students used different tools to discuss the same topic. That’s because I did not require that a particular tool be used with each assignment. Students were free to use the tool that they thought would work best for that particular post. If you are interested in exploring the wide range of content and quality that was produced this year, here are the links to all the student blogs.

Here are some of the awesome things about student blogging, in my experience:


Variety of student work

Since students used many different tools to create artifacts for their blogs, I was never bored grading their posts, and in fact was usually incredibly entertained and impressed by what students can create given the freedom to do so.

Portfolios of learning

The blogs became a record of student achievement that we can look back on for proof of learning. Along with their color-coded gradesheet, a student’s blog is a powerful indicator of the level of understanding for any given topic or skill that we learned throughout the year.

Wide audience of readers

Many people ended up looking at the students' blogs, not just me. For example, parent conferences will never be the same again, since it was so easy to pull up a student's blog in order to view and discuss the student's level of performance. Parents have access to the entire list of student blogs, too, so it was easy at conferences to point parents there if they wanted to compare how their student was doing to how others were. The kid who has three blog posts starts squirming in conferences when their parents see other students' blogs that have 10 or more posts.

Student blogs were also publicized via Twitter or my blog, which led some traffic their way. At least one student and future teacher made lots of connections with the edublogging community this year.

Resources for each other

Not all students learn at the same rate or in the same way. This is one of those things about teaching that is easy to say, but hard to do something about. However, the blogs let kids work at their own speed and with tools of their own choosing. Inevitably, some student posts were finished before others and became learning tools for those students who were behind the rest of the class. Towards the end of the year, when they were a bit more mature in the whole process, some students even started crediting the peers whose work had helped them write their own posts. It was very cool to see them learning from each other via the blogs.

There were some challenges along the way, of course, as we tried blogging our way through the year:

Blog writing is time intensive

If you want students to do a good job writing their own blogs, be prepared to give them plenty of class time to write, revise, and experiment with new tools.  Every year it seems I get to discuss less and less content with students, but this year saw a big jump in the time I had to allow students to have workdays on the computer so that they could stay current with their blogs. I wouldn’t have it any other way, but it will force me to look very carefully at what I have planned for next year’s classes.

Fair access to blogs

Part of the reason for spending class time on blogging is concern over fair access to the Internet to complete the blogging activities. Many students do not have easy computer access at home, although some do. I wanted to try to rule out any unfair advantage that students might have over others, but was only partly successful. Of course a kid with his own computer and Internet access is going to have more chances to blog and make amazing products than another kid who has to rely on computer access during the 50 minutes I see them in class. I'm not sure that's a reason, though, not to blog. It's more of a reason to agitate for more equitable Internet access in my community.

The Mac blogging platform is not as useful

There were some students, fortunately few in number, who for one reason or another kept forgetting their Blogger account passwords and would get locked out of the system. For these few (maybe 5 students in all my classes), I set them up with blog accounts through our local MacServer. That let them use the same password as they used to log on to their laptops, but the advantages stopped there. We found that the Mac-hosted blogs had no separate publish option, so as soon as a kid saved a post, finished or not, it showed up in my Reader. Also, we never figured out how to allow embedding within the Mac blogs, so those students had to post simple hypertext links to the artifacts that they created rather than having them appear right in the blog page.


Plagiarism

There was some plagiarism of blog posts, but it was usually incredibly easy to detect. The most obvious cases occurred when students simply lifted another student's blog post and pasted it in as their own. I had one student, famous among teachers at our school for this sort of behavior, pull this stunt about 5 times in a row while trying to meet one particular standard. I simply refused to put any grade in her gradesheet until I was convinced it was her own work. Google searches and Plagium worked great for me in providing evidence that someone had copied material from a source or another student's blog. I probably didn't catch everything, and might team up with our English teachers and somehow use Turnitin with the blogs to try to avoid problems next year.
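For the most blatant cases, where one post is a near-verbatim copy of another, even a crude text-similarity check can raise a flag. Here is a rough sketch using Python's difflib; the post text and the 0.8 threshold are my own inventions, and this is no substitute for Google, Plagium, or Turnitin.

```python
# Crude copy detection: a character-level similarity ratio between two posts.
# A ratio near 1.0 means near-verbatim copying; 0.8 is an arbitrary cutoff.
from difflib import SequenceMatcher

def similarity(post_a, post_b):
    return SequenceMatcher(None, post_a.lower(), post_b.lower()).ratio()

# Invented example posts:
original = "Osmosis is the diffusion of water across a semipermeable membrane."
copied = "Osmosis is the diffusion of water across a semipermeable membrane!!"
fresh = "Water moves toward higher solute concentration, as we saw with the potato cores."

print(similarity(original, copied) > 0.8)  # True: near-verbatim copy
print(similarity(original, fresh) > 0.8)   # False: independent writing
```

Comparing every pair of posts on the same standard is only a handful of lines more, which is why the lazy copy-paste cases were always the easiest to catch.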

Are blogs a rigorous assessment strategy?

One of the concerns that I had during the year was whether or not the new blogging paradigm is rigorous enough compared to the old model of lecture-worksheet-quiz-test-rinse-lather-repeat. This is a concern, of course, since I almost completely abandoned the traditional testing that I used to do (my Moodle site was very lonely this year).  Could I tell whether students were learning? Aren’t they just goofing around with web tools and having fun instead of suffering through the lectures that they need?

It was this article (via @mrsebiology) that convinced me that blogging can be just as rigorous as the tests that I used to give:

Rigor is the goal of helping students develop the capacity to understand content that is complex, ambiguous, provocative, and personally or emotionally challenging.

Blogging is in many ways an incredibly difficult task for students. Not only do they have to research background information about a topic, they have to synthesize a variety of ideas into a coherent piece of writing or media. They encounter interesting ideas about the course content and write about how these concepts affect their lives and society in general. In many ways, that's much more rigorous than any test I could give about stuff that I lectured on.

The worksheet is dead. Long live the blog.

2010-2011: My Standards-Based Grading Year in Review

As I've written elsewhere, my focus this year has shifted from tinkering with educational technology to tinkering with, well, most everything else about my classroom. The main focus has been changing how I grade students. When I started teaching, I used the typical points-based grading structure where 10- or 100-point assignments are given and students rack up points toward a total. From there, as I got sick of the points game (can you say cheating?) and tried to limit it, I moved to a more streamlined system in which students still earned points, just fewer of them. This year has seen the implementation of a standards-based assessment and reporting system, variously called standards-based grading or, occasionally, skills-based grading.

The main focus of this system is to provide feedback to students, parents, and myself about how students are performing on specific, predefined learning targets. This puts the focus on learning specific skills and content, not simple completion of tasks for points. So how was this accomplished? In short, I had to restructure my gradebook to reflect each major skill or content area in which I wanted students to be able to demonstrate proficiency. This meant that I first had to define the standards that I would assess students on. This task was not too terrible, since I teach a variety of concurrent credit (sometimes called dual credit) classes and had great guidelines from the college-level classes to pull from.

Next, I had to decide what tool to use to do the actual reporting of student progress on the standards. Our school’s online grading program was certainly not up to the task, so I designed a gradesheet in GoogleDocs instead. This let me set up conditional formatting of spreadsheet cells to use different colors to highlight areas of strengths and weaknesses. Once an appropriate gradesheet was created for each course I taught, it was a straightforward task to clone the gradesheet for each student and share it with them via their GoogleApps account. Using GoogleDocs also gave me the option to share students’ gradesheets with their parents as the need arose, since the school’s online gradebook really doesn’t show the detailed feedback that the gradesheet does.
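To give a feel for how the gradesheet worked, here is a toy Python version of its two key ideas: conditional color-coding of 0-4 scores and cloning one blank sheet per student. The color bands, standard names, and student names are all made up for illustration; they are not the actual rules from my GoogleDocs sheet.

```python
# Toy gradesheet: per-student sheets cloned from one template, with 0-4 scores
# color-coded the way conditional formatting did it in the spreadsheet.
# The bands below are illustrative guesses, not the real spreadsheet rules.
def score_color(score):
    if score is None:
        return "white"   # no evidence yet for this standard
    if score >= 3.5:
        return "green"   # exceeds the standard
    if score >= 2.5:
        return "yellow"  # meets the standard
    return "red"         # standard not yet met

standards = ["Std 1: Content", "Std 4: Exp. Design", "Std 6: Technology", "Std 7: Communication"]
students = ["Student A", "Student B"]

# "Clone" a blank gradesheet for each student.
gradesheets = {name: {std: None for std in standards} for name in students}

gradesheets["Student A"]["Std 6: Technology"] = 4
print(score_color(gradesheets["Student A"]["Std 6: Technology"]))  # green
print(score_color(gradesheets["Student B"]["Std 1: Content"]))     # white
```

In the real sheet, the conditional-formatting rules did the coloring automatically, which is what made strengths and gaps jump out at a glance.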

Then, of course, came the real test of the system: actually using it with students. This meant having to explain standards-based grading to them during the first few days of class. Let's just say there were lots of blank stares. Talking about standards-based grading with students was probably a lot like talking about dancing: you might get the general idea and like the theory, but until you do it for yourself, there is no real sense of how it works.

But students did figure it out and, by the end of the first semester, they had a pretty good feel for what their gradesheets were all about and were beginning to use them to guide their learning. I began to hear the language of the classroom change: students started talking about which standards they still needed to meet instead of asking how many points an assignment was worth. Some students even asked me to give other students access to their gradesheets so that they could discuss them together and figure out what steps to take next.

It wasn’t all rosy, of course. Some students were so used to a points system that the idea that one unmet standard could lower their grade was really foreign to them. Even some of the higher-performing students, used to building up a surplus of points, had to think a bit differently. But most students caught on, and many seemed to really enjoy the flexibility of the system.

Here is a quick rundown of the things that impressed me about a standards-based grading system:

Guiding instruction

If you really want to know what your students are learning, try laying it out visually in your gradebook. For me, the big aha moment came after several weeks of school when I realized that there was no evidence of one of the major standards, Experimental Design (Std 4), in anyone’s gradesheet. Why not? I hadn’t provided them any opportunities to meet that standard yet. After that, I tried to plan activities that would help students design their own labs. I struggled with that standard all year, actually. Looking ahead to next year, I know for sure that one of the areas that I need to work on is to get students more involved in performing real scientific investigations.

Informing students of specific strengths and weaknesses

After several weeks of trying standards-based grades, the gradesheet made obvious what I knew from several years of teaching experience: each student brings a different set of skills to my classes. Some students were rocking the technology-savvy standard (Std. 6) with their prezis, videos, and animations, while others were brilliant writers performing at high levels on their communication standard (Std. 7). Each student gradesheet was unique, but having the gradesheet as a reference made conversations with students about their grade much more meaningful than simply saying "you have to work harder" or "just turn stuff in." We could see exactly which content or skills each student needed to work on.

Allowing for mistakes and experimentation

One of the great things about standards-based grading on a 4-tier scale is that students don't dig themselves into holes like they can in some points systems. Using cumulative points, the kid who forgets to turn in an assignment loses points and their grade suffers (sometimes drastically), unless you later give "extra credit," which is usually unrelated to any real learning. Instead, standards-based grades separate out the areas of difficulty into discrete chunks which can be addressed individually without necessarily dragging down the entire grade. My students were allowed multiple chances to meet each skill or content area standard, a fact that they really appreciated. This meant that students could botch a quiz or try some web tool that didn't work, but then try again with a different assessment to show an increase in their ability or understanding. For example, here's part of a gradesheet from a student who fixed some misunderstandings:

In at least three of the standards (columns), there is evidence that the student performed better on a second try at each standard.
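The reassessment mechanic boils down to keeping the best attempt per standard. A tiny sketch (the standards and scores are invented):

```python
# Each reassessment appends an attempt; the recorded level is simply the best one,
# so a rough first try never drags down a later, better demonstration.
attempts = {
    "Std 2: Cell Transport": [2, 3],  # quiz, then a corrected blog post
    "Std 3: Genetics": [1, 2, 4],     # three tries before it clicked
}
current = {std: max(tries) for std, tries in attempts.items()}
print(current)  # {'Std 2: Cell Transport': 3, 'Std 3: Genetics': 4}
```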


Showing student progress and achievement over time

This is perhaps my favorite part of using standards-based grades and the individual gradesheets. Each gradesheet starts out the year as a blank slate, but as we work together through the year, encountering new challenges, students begin to see a color-coded record of their achievements, sort of a trophy case, perhaps, of all that they have done throughout the year. Yes, numbers are involved (0-4) just like a regular gradebook, but there is something about color that draws the eye and paints a picture of what has been achieved in a way a numerical score cannot. For example, here are the content knowledge (Std. 1) gradesheets for a few of my students:


Content area knowledge learned by Student A

Content area knowledge learned by Student B

Content area knowledge learned by Student C

You can see at a glance that Student A had some strengths and weaknesses throughout the year, Student B showed excellent understanding in all that they did, and that Student C struggled to produce evidence of learning for a number of content areas.

I'm going to let my students have the final word in this discussion of standards-based grading. I asked students in my biology classes to produce a short "advice" video that I could share with next year's students to make the transition to SBG easier on them (and me). Here and here are a couple of the videos that best explain what students think about my grading system. I love the quote at the end of Cherlyn's and Tenchita's advice video: "It's different, but you'll get used to it. It's better than anybody else's."

Social media and the death of “standardized” testing

Student use of social media invalidates state and national standardized testing results. Here’s why:

When students take our state-mandated standardized assessment, CSAP, they do so within a “testing window.” This means that students are sometimes taking the test days or even weeks apart from each other in different districts across the state. Students see the exact same test items, as far as I can tell, so that all schools can be compared on an equal footing. And that’s where student use of social media begins to make things unequal.

Here’s the problem: There is an incredible focus on staff ethical practices regarding “test security” but little to no mention of how to regulate student discussions about test items.

We certainly can and do ban phones during the test so that kids taking the same test at the same time don’t text each other answers. But what happens once the test is over and students get their phones back or they go back home to their computers/iPods/etc?  A phone/electronics ban reduces direct imaging (photography) of the actual test items and direct synchronous communication during the test, but it does not stop information about the test from finding its way online.

Let's say, for example, that Student A runs home from their test, gets on Facebook, and updates their status to say "Whew! CSAP testing is over for today. Gee, that math section 2 sure was hard!" Maybe this is not a big deal, but what if Student B says "Gee, that science section 1 was hard! All of those questions about (_______) drove me nuts! What did you put for #3?" Other students might join the conversation and explain whatever it was that Student B missed. And then we have a problem. Once information about the tests they just took is online in their Facebook status or their latest tweet, it becomes a permanent, searchable, and replicable (Danah Boyd's terms) record of the test items, and students who have not taken the test yet are the invisible, unintended audience.

Due to their friendly banter on Facebook, Students A and B and their friends have just violated three of the “ethical practices” that apply to teachers (from this year’s CDE Proctor’s Manual (pdf), pg 6-7):

Presenting items verbatim or paraphrased from the assessment (not ethical)

Telling students the correct responses or allowing them to discuss answers among themselves (not ethical)

Allowing the use of notes or other materials that may give students an unfair advantage (not ethical)

Why include the last one? They weren’t passing notes during the test! No, but what they did create is an online body of knowledge that other students who haven’t taken the test yet can use to prepare for the test. If students in Denver take the test a week before we do, I’ll bet that my students can find some interesting hints about what’s on the test.  And if we were unlucky enough to schedule our tests before others? Well, I guess we are helping them out instead. Can everyone in Colorado publish when you are going to be CSAP testing next year? ; )

Because of the asynchronous nature of the testing, and this would be especially true for nationwide testing, all of the ethical violations listed above will occur through social media and will differentially affect the outcomes of the test. The bottom line is that no one really has "standard conditions" for test taking anymore. The outcome of these so-called "standardized" tests not only depends on your abilities in a particular subject, but also on your ability to comb through social media sites for hints about what to expect. And what about those poorer students or entire districts who may not have access to all these wonderful "hints" about the test that are floating around Facebook and Twitter? They're out of luck, apparently.

So how do we stop this breach of ethics and return our tests to “standard conditions?” We could ban/block Facebook, Twitter, and other social media sites, but that’s been tried in a few other countries lately and doesn’t seem to work : ) We could have students sign some sort of nondisclosure statement the same as teachers do (No, really. We do.). I doubt, though, that anyone could monitor every kid’s account for possible disclosures of CSAP items. Even if we could then there would be accounts called CSAPPirate or PassMyCSAP popping up to spread information around. There is absolutely nothing to stop students from sharing information about tests that they just took with students everywhere else.

Maybe the answer lies with the testing companies.  Maybe they should (do?) create multiple versions of the test with different questions for different regions of the state or different test administration dates. I’m sure they would love to charge taxpayers for the extra work that that would add. Besides, wouldn’t that make different sets of “standard conditions?” Yuck.

I suppose we could all take the same test at the exact same time on the same date (with our phones off, of course). That would be especially fun to coordinate in the event of nationwide tests.

The question that we need to answer is this: are there ways to tell which students have received advanced knowledge of test items through their social networks? If the answer is “no,” then state and national testing is not “standardized” for our students because access to technology will, at least in part, determine their degree of success. Due to differential access to test prep programs and private tutors, testing probably hasn’t really ever been “standardized,” but social media has made it even less so.

“I felt like I was teaching myself!”

Since I teach several science courses that are concurrent with similar courses at our local community college, I have the chance to be formally evaluated by students each semester as do all “regular” college faculty. The most recent batch of evaluation results turned up in my box a few days ago, and I was eager to see what students were saying about my classes. I’d given a survey a few weeks into the school year, but the results of these evaluations would be another chance for me to gauge student reactions to this year’s implementation of skills-based grading. As soon as I could, I cracked the envelope containing the summary of student responses and read what my students had to say.

I'll skip over the numerical averages for my "performance" (this is a mostly standards-based blog, after all) and cut right to my favorite part: actual student comments. Overall, the written comments were very well thought out and were pretty positive about my class (Anatomy and Physiology in this case). One comment in particular stuck with me, and I've been trying to figure it out. It's the one I used for the title of this post: "I do not think the grading system was appropriate for the course. I felt like I was teaching myself!"

Are students really “teaching themselves” in my skills-based grading system? Does this comment mean that I run the sort of classroom where the teacher sits at their desk while students run amok? Does it mean that students feel there is no direction to the class? Those issues would certainly be worth fixing, if that is indeed what my class is like.

This comment came right after a couple others in which students claimed to be disappointed that we weren’t using worksheets very much and were using too much technology. Taken together, these comments highlight the fact that at least a few students are uncomfortable with how they are being assessed in my classes. In fact, there were a couple of low votes in the “fairness of grading” category that I can only assume came from the students who wrote the comments mentioned above.

I’m left with some confusion, though, as to how to help students who are not taking advantage of the structure of my classroom. What to some is “teaching themselves” and a lack of worksheets and lectures has been a very different experience for many others who have embraced different ways to learn and to show that they are learning in my class. Some students treat me as their coach for learning the course content and skills, but many students are still wrapped up in getting a good grade, passing the class, or simply not failing it. I’ve taught too long the way some students expected, with worksheets turned in for points, often copied from neighbors and not true products of learning. Some students were clearly expecting more of the same and are still parsing out how to achieve a “good grade” without doing much learning.

I’ve got some work to do, obviously. My first step will be to look carefully at my instructional practice to make sure that I am supporting students as fully as I can for them to be successful. If that is in place, then I’m going to move on to the bigger job ahead of me, that of tackling “the system” that makes completion of assignments equal to measuring learning. Part of that work is happening right now, as I write this post to proselytize for a careful reassessment of what we do in our classrooms. If I can convince some or all of my colleagues to stop giving grades for completion and maybe even get them to try some sort of standards-based assessment and reporting system, then students should arrive in my classes already expecting to be held accountable for their actual learning.

In some ways, “I felt like I was teaching myself!” is the most complimentary comment of all. If they are learning to teach themselves, then I’m on the right track. If students can leave my classroom knowing how to learn, I’ve done my job, because I won’t be part of their lives forever. They’ll have to be able to do it on their own, and they might as well learn how to learn now before it really matters in college or their careers.

On why standards-based grading isn’t enough to transform a classroom

Mediocre Physics Teacher has an interesting question for the SBG crowd:

The worst epithet an SBG teacher can hurl at another teacher seems to be “Your grading is nothing but a game for points.” I don’t understand how replacing 70s, 80s, and 90’s with collections of 2’s, 3’s, and 4’s changes the motivation of college-bound students from achievement toward learning. I don’t understand how it’s not points.

There are two issues to address here: the grading system itself and the level of motivation of students.

Is it possible to do SBG where it's still just about points? Sure, if your assessments of learning suck like mine often do (did?). For me, implementing an SBG grading system isn't what transforms what I do. It's mostly a new structure for my gradebook. I could theoretically take every assignment that I gave last year, shove it into a standards-based category in my gradebook, and spit it back to kids this year. This wouldn't be a shift in how I teach at all. Kids would still complete the same worksheets and study guides that I used to give out, but they would just find weird subscores written on each one for each standard that the worksheet met in the gradebook. They would play the same games of copying their neighbors' work without putting much thought into the assignments, because no real thought was needed for some of the stuff I used to grade for points. It's not about points; it's about crappy, weak assessments.

What  needs to happen to transform your classroom is a very careful weeding out of what finds its way into your gradebook.  If you are still giving out worksheets and study guides like I do, recognize that they are practice activities and shouldn’t be in the gradebook at all.  If a kid doesn’t complete it, that’s their missed chance to learn the material, or perhaps they’ve found another way to learn about it through some other resource. There’s this thing called the Internet these days that has way better learning activities than half of the stuff I throw at my kids. These sorts of practice activities, like homework, webquests, and study guides don’t need to be graded.

Next, kids need to be doing lots of formative assessment before they hit anything that is going to become a permanent fixture in their gradebook. For me, this takes the form of student blogs. After the practice activities are over and they have some new learning to show off, my kids head to their blogs to tell each other about it. Posts on each student's blog reflect their current understanding of a topic. If that understanding changes, then another post is in order, or corrections can be made to the original post. It's not set in stone: everything is editable. If a student wants to "reassess," they write another post. We do a few quizzes and tests, but since the best test questions are of the free-response variety anyway, why not let students write all the time, whenever they want? Throw in some spicy, fun web 2.0 tools and some students will produce artifacts for you like crazy. I keep tabs on students' blogs and write comments and a "grade" that I think represents their current level of understanding of the different standards. This "grade" is very fluid and represents formative assessment. I put it into our school's online gradebook for parents and students to see, but they know that it can fluctuate a lot before the end of a marking period.

There is some summative assessment (a.k.a. big tests) that happens towards the end of each quarter in the form of a midterm or final exam, but those are not nearly as important to the students’ final grades as are their efforts to explain their learning in their own words.

Back now to the second issue raised in the quote above: motivation. If a student’s grade is the sum of all their points, they will try for more points to add to the total. If a student’s grade is instead built from standards, where each and every content and skill standard matters for the final grade, they will try to provide evidence that they have learned each skill. I highly recommend abandoning (or subverting) grading programs that average a student’s numerical scores. Each and every standard should be considered separately. That way the goal of each student is to demonstrate mastery of each standard, so that no unmet standard pulls down their grade due to lack of effort to understand that topic. It works that way about 80% of the time with my students, with an unfortunate few unwilling to put forth the effort (samjshah has a great rant about that here).

In summary, get your kids used to the terms “practice,” “formative assessment,” and “summative assessment.” Do lots of the first, keep track of the second in a flexible sort of system, and only sprinkle in the last when you feel it’s really needed. If you want to do this in an SBG system, so much the better, because then you can more easily keep track of where students stand on specific learning standards and learn what you need to do as an instructor to help them grasp the important ideas of your discipline.

Skills-Based Grading: Trying to Avoid the Standards-Based Tag

Regular readers of this blog know that it was only a matter of time before I came up with a gimmicky new term for what I’ve been trying to achieve in my high school science classroom. I think names are important when I discuss what I do as a teacher to improve my instruction. I’d like to avoid the label “standards-based” because it has so many different interpretations lately. I was never sure that I was trying standards-based assessment the same way as other folks, many of whom treated long lists of content standards like a checklist to be marched through over the course of the year. I’m very sure that I’m not doing it the way some state boards of education would have me do it, with their state standards appearing in my gradebook and every lesson cross-indexed to whichever of their benchmarks I’m addressing that day. The “standards-based” movement has stolen the real meaning of that term from what I do, so I’m going to coin another one that matches up better with how I operate.

I’m going to call the standards-based assessment and reporting system that I use in my classroom Skills-Based Grading (still SBG!) because that’s where I want the emphasis to be for my students: on developing important skills, not on memorization of content. I’ve had a semester now to watch how it works with students and I am thrilled at the success we’ve seen.

Here are the nuts and bolts of how I’ve arranged things for Skills-Based Grading:

  1. No daily assignments for points
  2. No homework for points
  3. Nothing for points
  4. It’s not about points

Here is what it is about:

  1. Important concepts for each course I teach were determined by reviewing Colorado Department of Education science standards and Colorado Community College standard competencies.
  2. Important skills for each course were determined by reviewing the above sources, ISTE NETS, A Challenge to ACT (and be your best) by Paula White, and conversations with the outstanding educators in my Twitter PLN.
  3. The important skills turned out to be the same for all of my preps: research and fact-finding, lab procedural skills, experimental design, data presentation and interpretation, technological proficiency, communication, self-analysis, and cooperative learning.
  4. These 8 skill standards in addition to a single comprehensive content standard are the 9 gradebook columns in individual student GoogleDoc spreadsheets.
  5. Content standard 1 gets its own sheet within the student gradebook for separate tracking of student content-knowledge progress.
  6. Any assignment that a student submits is evaluated for one or more skill and/or content standards using a 4-point scale.
  7. Comments are left on the students’ gradesheets so that they may make changes to improve their work.
  8. A student’s final grade depends upon demonstrating achievement on every standard since standards are not averaged together.

Here are some links for the visual learners out there about how SBG worked this semester:

In practice, what this system does is create the opportunity for students to be rewarded for their excellent writing skills, their technological savvy, and/or their ability to help others in addition to showing that they learned that fats are really called triglycerides and that plants respire as well as photosynthesize. Content will always be available to my students whenever they go online. My job is to teach them how to access and interpret that content and my gradebook now reflects how well they are able to do so.

Some observations on SBAR in my science classes

Although I haven’t yet given it a catchy name like my previous Binary Grading grading system, my new standards-based system of assessment and reporting is working well. We are midway through the second quarter of school and I have enough experience with the system to step back and make a few observations about it. As with everything involving high school students, these observations could change tomorrow, but here’s what jumps out at me so far:

Volatility of grades: Students and I were surprised at some of the major grade swings that are possible in an SBAR system. I’ve had a few swing wildly between B’s and D’s and back, which usually doesn’t happen under a point-hoarding system in which assignments contribute to an average value that is hard to swing once enough points are built up. In my system, though, the nine major standards are reported independently of each other and all count so that poor performance in one can negate good performance in another. I like it, though, because it keeps kids on their toes. Some had begun to be complacent about their grades but a few forced reassessments woke them up to the reality that they may be called upon to continue to demonstrate mastery of each standard.

The role of the course content standard (Standard 1): When I was choosing my standards for this year’s pilot SBAR project, I chose to have 9 standards that were identical for each of my 4 preps because there are some skills that I want all my students to learn and demonstrate in every science class they take. The only major difference between the biology, anatomy, chemistry, and AP Biology standards is in Standard 1, which is subdivided into specific topic areas unique to each course. The intent was to 1) make a system that didn’t drive my students and me bonkers with 4 separate sets of standards and 2) deemphasize the content-related grade in favor of the skill-related grades. It is working quite nicely, in my opinion. Skills like analyzing research articles, designing experiments, and interpreting experimental data are much more important in determining the overall grade than whether a student knows the difference between osmosis and diffusion. I’m happy with that.

The role of the 8 skill-related standards: The skill-based standards were really written for me, and not the students. I recognized some deficiencies in my instruction and basically tried to force myself to make changes by creating a grading system that demands that I give students the opportunity to assess skills as well as content. So far I am doing okay with this, but I am still more content-driven than I would like. More student-designed labs are needed in most of my classes, for example.

Death of death by testing: My tests and quizzes can be tough, given the subject matter I teach, and I often see low percentage scores on assessments of the harder topics. Regardless of whose fault that is, in a points system a low test score needs to be “fixed” by curving, throwing it out, or some other fudging method so that the kid’s grade isn’t completely hosed. I used to curve or tweak point values so that some tests were not worth as many points, but that always bugged me, especially when I thought that I’d done a fine job teaching that particular topic. Now, though, my tests and quizzes are just additional pieces of evidence to add to the mix. I integrate percentage scores from content-specific tests into the 4-point scale in a way that rewards the high achievers but doesn’t completely destroy the low-scoring kids. It has worked well for me to have kids who score 90-100% get 4’s, 80-89% get 3.5’s, 70-79% get 3’s, 60-69% get 2.5’s, 50-59% get 2’s, and below 50% get 1.5’s. I’m pretty satisfied with this part of the system as well, since the only students who are really nailed by tests are those who don’t show up to take them.
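That percentage-to-scale conversion is just a banded lookup. Here’s a minimal sketch of it in Python (an illustration only; my actual conversion happens in a spreadsheet, not code):

```python
def percent_to_scale(percent):
    """Convert a raw test percentage to the 4-point scale.

    Bands: 90-100% -> 4, 80-89% -> 3.5, 70-79% -> 3,
    60-69% -> 2.5, 50-59% -> 2, below 50% -> 1.5.
    """
    bands = [(90, 4.0), (80, 3.5), (70, 3.0), (60, 2.5), (50, 2.0)]
    for cutoff, score in bands:
        if percent >= cutoff:
            return score
    return 1.5  # floor: even a bombed test doesn't zero out the standard
```

Notice that the floor of 1.5 is what keeps one disastrous test from wrecking a kid’s grade the way a raw 40% would in a points-and-averages system.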

GoogleDocs rock the SBAR: All my record keeping is Google-ified. Student blogs are collected into my Reader, where they are organized by class period. Evaluation of their blogs and other assessments is recorded in each student’s own private Google spreadsheet, with conditional formatting to show 4’s (blue), 3’s (green), 2’s (orange), and 1’s (red). Loving it! It’s truly the best part of the whole grade-system switch. The spreadsheet is shared with the student (view only, of course) and with parents as needed. I leave comments along with each assessment so that students have some guidance should they choose to reassess a particular standard. Sure, it was a pain to set up over a hundred spreadsheets at the beginning of the year, but it’s paid off.
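The color coding itself is just a mapping from score to color. The real version is a set of conditional formatting rules in Google Sheets, but the scheme amounts to this hypothetical Python sketch (rounding half-point scores like 3.5 down to the whole-number color is my assumption, since only whole numbers get colors above):

```python
# Color scheme from the gradesheets: 4 = blue, 3 = green, 2 = orange, 1 = red.
SCORE_COLORS = {4: "blue", 3: "green", 2: "orange", 1: "red"}

def color_for(score):
    """Return the display color for a score on the 4-point scale.

    Half-point scores (e.g. 3.5) fall back to the next whole score's
    color -- an assumption, since only whole numbers are listed above.
    """
    return SCORE_COLORS[int(score)]
```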

The gradebook shows what they know: Yeah, that was kind of the whole point. But it works. Students and I can glance at their Standard 1 sheet and point out which content areas they struggle with. We can look at their main gradesheet and point out which skills they need to spend more time on. As a communication tool, the SBAR gradesheet is vastly superior to the school’s online gradebook. Even though I report similar numerical data in both places, the constraints of the online gradebook and the freedom of expression (color coding, written feedback, smileys ; )  in the Google gradesheet combine to make the gradesheet much more useful and fun to use.

Future tweaks: I need to look closely at how the system is being implemented in the 4 different preps and make sure I’m providing enough chances at assessments for the different standards. Looking at student gradesheets is really assessing myself in a lot of ways, because if there are major gaps in the standards being addressed, that’s really my problem, not theirs. Biology is working wonderfully, as is Anatomy, probably because I spend the most mental energy trying to reform those classes. Chemistry has far fewer assessments in the gradesheets than I would like, and those students haven’t met as many different standards as I think they should. Mostly we’ve hammered the lab principles and procedures standard (Standard 3) really well, since we do a lot of textbook labs in chem. AP Biology is another beast altogether, because in some ways I think that everything we do in that class is formative assessment and rarely finds its way into the gradesheet. My tendency with AP Bio is to use a lot of informal assessment (discussions with students), so we don’t stop and take quizzes and tests very often. This makes for a very empty gradesheet, and I’m not sure whether that is a bad thing or not. In a sense, the real summative assessment for that class doesn’t happen until May when the AP Exam rolls around. Also, having only 3 students in that class this year lends itself to a lot of one-on-one discussion, so this may not be the best year to judge the implementation of SBAR in that class.

What really hasn’t worked: I’m not happy with the way that the midterm exam results are reported to students. All my classes took midterm exams right after 1st quarter as summative assessments of their learning to that point. Their scores do not show up in their color-coded gradesheets since they are not part of the standards-based grade but instead only show up in the school’s online gradebook in the semester test slot. That’s the only way I found to report the grade, but I have a very strong sense that students don’t really understand the role of that midterm grade because it is buried in a part of their online gradebook that they don’t usually look at until after they’ve taken semester final exams.  I’ve got a bad feeling that students won’t truly realize that the midterm has the weight it does (7.5% of their semester grade) until they see how it affects their final grade. If my experience so far proves true, you can tell students about your system all you want, but until they see how it affects their grade, they don’t really get it. I’m sure, however, that describing my SBAR system to students and parents will be so much better next year now that I’ve got some concrete examples of how it works to show students.

Why I’m anti-rubric

Look around the edublogosphere (love that word, by the way) and you’ll see lots of folks proudly showing off their rubrics for this and that project. I’ve even seen a teacher brag about their rubric for a trash collection project. I’ve written a few myself. We must use rubrics, goes the common wisdom fed to us in professional development sessions and ed journals.

Why? There are some good reasons, I suppose. We are told that rubrics let students know exactly what the teacher is looking for on a project. They are also supposed to make grading more objective, since the teacher is using the same rubric for every student and therefore judging everyone the same.

Sure, rubrics might let students know exactly what a teacher expects and students might thus be guided to give the teacher exactly what they expect. This is a good thing for some students. But doesn’t this kill creativity? Even if you have a blank on your rubric for “creativity” do students really deliver it if you’ve pigeonholed them into a strict set of guidelines for the project? Rubrics kill creativity.

And if you have that score for “creativity,” how objective can you be on your so-called objective assessment using a rubric? Let’s not pretend that teachers aren’t influenced by factors such as presentation skills, speech patterns, neatness, tech savvy, and dozens of other unmeasurable factors that go into applying our rubrics “objectively” to students’ projects. Rubrics are just as subjective as any other form of assessment. More organized, yes, but still subjective.

In my one and only conversation with my principal about my SBG system, he asked if I was using rubrics to judge students’ blogs (because of course we need rubrics). I replied in the affirmative, because I do use one, just not the sort of rubric he probably had in mind: one that takes hours to craft and spells out every detail students could possibly need to know to be successful on the project.

Instead, I use this simple scoring system for absolutely everything students do in class:

4 – Outstanding demonstration of skills/knowledge that goes beyond what was taught in class.

3 – Solid understanding of the material and/or skills being assessed.

2 – In progress, but advancing towards the learning target.

1 – Needs lots of additional practice and refinement of thinking and/or skills.

This scale is neither original nor particularly imaginative, I admit. But it works. It has enough flexibility to be used in any assessment, be it a verbal conversation or video project. It is clearly subjective, which lets me give individual feedback to each student about their level of achievement so they can make changes as needed.

I love using this system to assess student blogs. Students have found so many different ways to show understanding of the topics that we have discussed and I am never bored reading the same project over and over. Each person’s voice comes through on their blog and isn’t hampered by trying to match what I want them to learn point for point on some rubric. Instead, they can show me what they have learned, which is far more important.

Grade police vs SBG: Does anyone win?

I’ve been struggling with having to cram my SBG system into the constraints of my school’s antiquated system of grades. As much as I’d love to say that I’m happy with the results, I’m not.

This week I’m faced with the task of finally making students responsible for a grade that will appear on a report card. Even though it actually doesn’t mean anything in terms of my course grades (I carry over standards scores from quarter to quarter), students seem to be finally paying attention to their standards-based grades and freaking out.

Before I get to the freaking out part, let me remind you that I am using standards-based assessment and reporting for the first time at our high school, as is my fellow troublemaker and art teacher Justin Miller (@boundstaffpress). We deal with the school’s accountability system of printing student grades every week and threatening the wrongdoers (F-troopers), some of whom even get pulled out of class every Monday. I suppose they are given a stern lecture or some such punishment. Clearly the whole system is NOT student-driven; it relies on administrators and athletic coaches playing Big Brother, watching over students’ grades to tell them when they are off track.

Meanwhile, in my idealistic little corner of the universe, I figured that SBG was going to increase students’ independence. They would see their grades listed by standard and take charge of which types of evidence they wanted to use to show me that they were learning each standard. Sure, we would work on some core ideas as a class and take the occasional quiz, midterm, and final exam together, but overall, students would show me what they were learning more or less independently.

You see the conflict coming now I’m sure. What I forgot to take into account was how trained students are by our school’s systems of points and eligibility lists.  It turned out that as long as they were not failing my class, an alarming number of students assumed that they were doing fine in my class, even though their blogs and SBG gradesheets (reasonably frequently updated, I might add) were practically empty of any evidence of learning. Parents, too, seemed happy with student grades as long as the overall percentage “grade” was high enough, even though some standards were not yet met.

I had been reserving judgment on some of these students who seemingly had a hard time getting assignments done, preferring to keep their grades posted at a 62% D, which meant that for most of the quarter, no one was failing my classes. But this week, reality set in. I had to match my system to that of the school and punish the unworthy who were not doing their work. F’s arose in abundance all at once because many students had missed at least one chance to meet one of the standards. And in my system, each standard counts, Marzano/Buell conjunctive style. So one standard not met (there are only 9, by the way) meant a stinky F appeared.
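In code, the conjunctive rule works roughly like this Python sketch (the passing threshold and letter-grade cutoffs here are illustrative assumptions, not my exact conversion; the key point is that a single unmet standard forces an F):

```python
def conjunctive_grade(standard_scores, passing=2.0):
    """Marzano/Buell-style conjunctive grading on the 4-point scale.

    standard_scores: dict mapping each standard's name to its score,
    or None if no evidence has been submitted yet. Every standard must
    be met; there is no averaging to hide a weak area.
    """
    scores = standard_scores.values()
    if any(s is None or s < passing for s in scores):
        return "F"  # one unmet standard sinks the whole grade
    # Otherwise the lowest standard drives the letter grade.
    # These cutoffs are hypothetical, for illustration only.
    lowest = min(scores)
    if lowest >= 3.5:
        return "A"
    if lowest >= 3.0:
        return "B"
    if lowest >= 2.5:
        return "C"
    return "D"
```

Contrast this with a points average, where a 4 on one standard could quietly cancel out a 1 on another.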

So now I have students flipping out and frantically trying to complete required blog posts that most students finished weeks ago. A few have even resorted to copying others’ posts in their panic to get off the ineligible list. Some are doing reassessments for a lab quiz that few students appeared to take seriously the first time around (sadly, it was the only assessment for the lab skills standard in that course so now their grade is hosed).

It probably doesn’t help that I don’t push hard and fast deadlines for blog posts so some students never wrote them.  It also doesn’t help that some standards were only assessed once or twice in the quarter so that I didn’t feel confident enough in students’ ability on that standard to give them a ‘real grade’ until just this week. Maybe it’s just that I’ve been teacher-centered way too often (the students I have this year certainly expected to be taught that way when I met them) and haven’t allowed students to be more in charge of their own assessments and reassessments.

That was my downfall this quarter- I bought into the system that said I needed to have specific assignments from students or else they have to report on Monday mornings to the grade police.

That’s what will change next quarter- I’m giving them their independence back.

We will do labs and activities to help them meet the various content and skill standards. The rest is up to them. They have a place to post evidence of their learning (their blogs). They know what general skills I expect them to master (the Standards). They know that they can succeed if they work hard on each standard. I’ll probably need to throw in “forced reassessments” (Matt Townsley’s term) along the way if some of the standard scores appear to be getting stale, but hopefully it won’t come to that too often if students are demonstrating their learning.

Then we can sit down together in December and have a real conference where they can defend a grade for the standards-based portion of the class grade instead of having to freak out at the last minute when “real grades” appear out of nowhere. Will I continue to “grade” along the way? Probably, since both parents and students seem to really key in on a number. The grade police, too, are watching. Let’s show them what kids can do with a little independence.

A minimalist standards-based grading system: dream version

Jason Buell got me thinking again with his latest post in which he gives some great tips for all the SBG newbies. A main point of his post was for us to not be too self-satisfied with our pretty lists of standards. Instead, according to Jason, we should be taking a close look at the assessments that we are going to use so that we can define our anchors and give concrete examples of good (and bad) work for students to follow.

Thinking about assessments, here’s what I realized that I needed to clarify about my classroom:

  • Will some (or all!) students be doing something unique to meet a certain standard?
  • Is it possible for one of my biology classes to decide to learn about a slightly different set of ideas about biochemistry than another biology class?
  • How do I go about writing the assessments ahead of time if these two conditions apply?
  • Most importantly: why did I write my standards and learning goals so broadly that they don’t drill down to specific content knowledge?

To answer these questions for myself and the occasional reader stumbling across this post, here’s how I picture my classroom in a couple weeks when school starts:

(insert dream sequence sound effect and shimmery visuals here)

Students will be introduced to the new system of assessment, we’ll call it SBG for now, in which points are not summed, averages are defunct (except in the inflexible beast of the school’s online gradebook), and the highest number anyone will see on an assessment is a 4. After the initial shock, the students and I will look at examples of what the record-keeping system will look like (in my parent letter, sbgradebook.com, and a spreadsheet or two) and discuss the 4-level rubric and its descriptors.

We’ll talk about why we have major Standards and Learning Goals to focus us, so it is not a completely student-driven system. (I do need students to meet the Colorado Community College Common Course guidelines for each course if they are to deserve college credit for my classes. That’s why I have the Standards and Learning Goals that I do. They are borrowed directly from the SLOs, the student learning outcomes, that the colleges of Colorado have asked students to master.)

Then we will get down to the business of starting on our first units of study. Here’s where the classroom becomes intentionally unscripted, or at least less scripted than in past years. I hope to be the guide-on-the-side type and give students some freedom in what they study in my classes, so long as they are making progress both in the content-specific Learning Goals and the performance-based Standards.  The students and I will probably have a chat at the beginning of each topical unit to define in more detail the supporting concepts worth focusing on, both in my mind and theirs. From there, they will pursue their own paths to demonstrating mastery of the skill and content standards for that unit. Surely some Web 2.0 stuff will be generated. Some inquiry-ish lab experiments will be performed. Portfolios and blogs will be created. Much fun will be had by all.

(insert exiting dream sequence sounds and return to reality visuals here)

So that’s what my classroom might look like, based on a vision, derived from my summer reading and the communal brain that was ISTE10, that students need to be producers of content and need to follow their passions whenever possible.

With this sort of idealistic, student-driven philosophy, I don’t think I can write many assessments between now and when school starts.  I haven’t met my students yet.