Hi-Five Machine

 

How do we reconcile the freewheeling spirit of makerspaces with the traditional sit-and-get, control-freak management of most public schools?

Makerspaces are trendy at the moment, as evidenced by pieces like the Education Week article The ‘Maker’ Movement is Coming to K-12: Can Schools Get It Right? and The Nerdy Teacher’s new book. The article reminds us that many K-12 teachers are turning toward “making” and away from standardized curriculum and testing, and I’ll be getting the book soon to see what Nick has to say as well.

But what exactly does “getting it right” look like in a K-12 setting? Should we enable student-driven learning and a “do-it-yourself, only-if-you-want-to ethos” like the Maker movement? I won’t claim to be doing it right, but I am trying out something. You can judge for yourself and perhaps take away some ideas to try.

Since 2011, I’ve been creating a makerspace in my high school that goes by the official course title “Physics” but sometimes gets referred to as “Phunsics.” Basically, I allow students to design and build whatever projects they want to, within some constraints of budget and safety. We’ve built everything from boats to rockets to Arduino-powered pianos. Students and I have had our fair share of successes and failures with the course, but it has grown immensely in popularity and I now run two sections of the course during the school day.

If you are thinking about getting on board with the Maker movement, here are the challenges I’ve faced in building a DIY makerspace inside a traditional public high school physics course:

Lesson planning: A DIY philosophy does not coexist well with lesson plans that tell students what to make. I’ve started the year with some pre-planned projects like the Marshmallow Challenge or the Physics 500, but after the first week students are on their own and no lesson planning occurs. Instead, my role becomes that of coach and advisor, and my primary job is to help with technical questions, keep students focused on their projects, provide materials, and maintain a safe construction environment for all. If your school requires you to turn in daily or weekly lesson plans, be prepared to explain why you don’t have any.

Multiple simultaneous projects: A DIY makerspace will allow students to follow their own interests. This means that with 18 or so students per class and two class sections, I’m looking at managing around 10 unique projects, and that assumes that students only work on one project at a time. Be prepared for a lot of mental gear-shifting as you help manage a diverse set of projects.

Lack of teacher expertise: I quickly found that student interests do not always line up with my strengths. This pushed me into uncomfortable territory at times. But I have had a huge opportunity to model real learning for my students as I tackled my lack of knowledge and skills along with them. A makerspace allows (forces?) you to model your skills as a lifelong learner for students. Several alumni of Phunsics have reported back that they appreciated the makerspace because they learned how to learn by taking the class.

Physical space: With 5-6 projects per class period, I have run into the lack of physical space in which to operate. This is especially true if students build a full-scale trebuchet or go-cart (been there). To solve space issues, I have had students working in no fewer than four different classrooms simultaneously (my room, chemistry/physics lab, outside, and shop). Be prepared to run around like a crazy person to keep track of where students are and what they are up to. You’ll need to think about the tools required, where they are located, and storage of the projects themselves. Chances are you’ll need to be very flexible in terms of what constitutes the makerspace “classroom.”

Behavioral issues: With great power comes great responsibility. Not all students will play nice with the departure from their normal classroom jail cell, especially if said jail cell is now spread out over two or three workspaces with one teacher. Typical teacher management strategies like busywork and pop quizzes don’t work when the content of the class is student-generated. Instead, relationship-building and the occasional behavioral intervention are the tools of choice. My general sense is that I have the greatest behavioral issues with those students who are either unwilling or unable to develop projects on their own and expect me to feed them projects. I usually deal with such situations by pointing students to Instructables and having them pick two or three interesting projects to mimic. Generally, though, student groups form around one or two strong leaders who can usually pull the weight of project creation and implementation and keep everyone in the group busy. I also use the Google model of 80:20 time (80% work, 20% creative play), which works pretty well, especially when students are reminded that they are over their 20% goof-off limit.

Supply shortages: A makerspace is student-driven, which means that student projects will be varied in their material needs, both in consumables and in equipment and tools. From week to week, I don’t necessarily have a clue as to what materials we might need down the road, because students have not communicated a need for them yet. We are in a constant cycle of brainstorming, materials purchasing, and production, and oftentimes it’s the purchasing step that causes the delay. If the project requires hardware and lumber, then students or I can get to the local hardware store pretty quickly. But if we’re building an Arduino-powered weather station, then we are going to have to wait until parts arrive in the mail. This is especially problematic for schools like ours in a small rural town with few major stores and relatively limited budgets.

Non-traditional assessment for traditional grades: Given that my makerspace exists inside a traditional school, letter grades need to be issued to keep admin and parents happy. My grading scheme for Physics resembles an interview in that when end-of-term grades are due (and along the way for sports/activity eligibility) I ask students to defend what grade they think they deserve. They are required to explain which projects they have worked on and what their individual contribution to group projects has been. We also have a set of grade criteria that are negotiated at the beginning of the school year. This year’s grade level criteria can be found here.

Documentation of work completed:  I was challenged early on to keep everything that we do in the class as public as possible, and we’ve mostly succeeded in keeping up with our social media responsibilities. At first I kept a separate blog on the trebuchet project. Some years students have kept a class blog like https://phunsics2013.wordpress.com but lately we have moved away from blogs. We currently have a blog or two (here’s one) but the major posting of student work is happening at LJHS 3rd Hour Physics and Sausee Phyx on Facebook.

I’ve learned a lot over the years of running this makerspace and have become a much better Maker myself. While it’s frustrating sometimes that student motivation can be an issue even in the most student-powered course on campus, I’ll keep offering this space where students can learn how to learn. Keep an eye on our Facebook pages for details of our future shenanigans.


I had the privilege of attending the recent Badge Summit in Aurora, Colorado which managed to pull in a bunch of badge geeks right before ISTE 2016. Why was I there? Curiosity about badges, I suppose, but also a sense that I need to change things up in my instructional design.

I’ve been doing the eportfolio thing for several years now and have come to realize that no one but me is seeing my students’ work, even though it is in an online space and can be made public. I’m looking for ways to have my students be recognized for their work in a way that transcends my silly grading scheme and the simple letter that can be seen on a report card or transcript.

Open Badges seem to be a way to accomplish that. As I understand badges at the moment, there are organizations out there that will help me to create and issue badges that are linked to evidence that the student provides. Most importantly, organizations such as the Common Application have recently begun collecting badges from students who want to show off particular skill sets to colleges and universities.

Adding badges on top of our existing portfolios could essentially create a new, more public layer of visibility for student learning. This means that I need to examine the language and standards that live in our portfolios and figure out how to issue badges that will be meaningful to students and to their audience, whoever that might be.
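For the technically curious, here is a rough sketch of what one of these badges might look like under the hood, assuming the issuing platform follows something like the Open Badges 2.0 format. Every URL, ID, and badge name below is a made-up placeholder rather than a real issuing service:

```python
import json

# A minimal sketch of a single badge assertion linking a student's badge back to
# evidence in their portfolio. Field names follow my reading of the Open Badges
# 2.0 vocabulary; every URL and identifier here is a hypothetical placeholder.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example-school.org/badges/assertions/1234",          # placeholder
    "recipient": {"type": "email", "hashed": False,
                  "identity": "student@example.org"},                   # placeholder
    "badge": "https://example-school.org/badges/lab-design",            # the BadgeClass being earned (placeholder)
    "evidence": [{
        "id": "https://studentportfolio.example.org/posts/osmosis-lab", # link to the portfolio post (placeholder)
        "narrative": "Student-designed lab measuring osmosis in potato cores.",
    }],
    "issuedOn": "2016-08-15T00:00:00Z",
    "verification": {"type": "hosted"},
}

print(json.dumps(assertion, indent=2))
```

The piece that matters most to me is the evidence field, since that is where the badge would point back to the student’s actual portfolio work rather than standing alone as a disconnected icon.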

There is a great set of guidelines for badge creation to be found at Aurora Public Schools, who have run a pilot badge program in a large, urban public school system for a couple of years now. As I put my thinking cap on about what my badges would look like, I went back to their guidelines:

  • Does this Badge provide rigor for our students?
  • Can the student demonstrate this skill independently?
  • Has the student had multiple opportunities to show this skill?
  • Is the Badge evidence based?
  • Is the Badge transferable?
  • Is the Badge based on a small/granular skill?

For me the sticking point was the requirement for badges to honor a small/granular skill. I’m generally a big picture guy and despise trivial details, but I realize that badges need to have some granularity to them in order to be meaningful. I set about digging into what students might earn badges for in my courses and came up with the following two lists, one for Biology and one for Anatomy:

Biology portfolio to badge map

Anatomy portfolio to badge map

The darker blue bubbles represent more granular topics or skills that might be more amenable to badging than the big 7 portfolio standards under which students currently collect their work. The challenge now will be to see if these lists of potential badges will be a workable framework from which to start designing and eventually issuing badges.


Regular readers of this blog will have perhaps noticed the complete silence in this space since around January of this year. This particular post is an attempt to dissect why a reasonably regular, once-a-month edublogger would drop off the face of the blogosphere and Twittersphere.

Let me start by saying that I’m not the only one. I’ve seen a decent number of Tweets and posts that basically have the same message: where is everyone? By “everyone” we are referring to those educator friends that we’ve connected with online. We just don’t connect as much anymore. Twitter has become a place to promote your particular brand or organization or to sell your latest book. The rapid-fire exchange of ideas between educators is still there, but the signal-to-noise ratio is tipping in an unfavorable direction. Maybe it’s also because I found other things to do and other communities (looking at you, Xbox and Destiny), but I haven’t spent nearly as much time in meaningful Twitter chats as I used to.

But if I get honest with myself, I think the real reason that I’m not on Twitter or blogging about my life as an educator is that I don’t have anything new to promote. My use of student blogs and digital portfolios is a steady presence in my classroom, a comfortable and effective way to gather student work. My system of standards-based grading is quirky, but reasonably mature, having been tried and tested since 2010.  I’m feeling like my classroom MacBooks look these days: mostly functional, but old and grimy, with a few missing keys. Nothing special to see here, move along.


So what is there to write about? What new wisdom do I have to pass on to you, dear reader? Is what I do as an educator worth writing any more about?

At this point, if you find this blog valuable at all, you (and I) need to thank a few folks for nudging me to pick up the proverbial pen once again, even if I don’t have any shiny new educational initiatives to sell you.

First, I noticed that people still read my blog, and even assign it as part of student projects. My blog stats led me to these teachers, who have assigned my internet-famous SBG is (not) a Fad blog post as a resource for their students’ final Honors English project. It’s super rad to see teachers involve students in the debate around whether to switch to SBG. Great stuff, and I’m glad to continue to be a part of the discussion around grade reforms.

Next, just yesterday I got to build a robot with fellow teachers from some nearby school districts as part of a training for a robotics competition that I’ll be mentoring in the fall. Right at the start of the day, one of my team members whom I had not met before mentioned that he reads my blog. Dang, an IRL person reads my stuff. I’d best get to updating the blog then. Thanks, Chip!

Finally, today I woke up to this amazing comment from a former student, Jeremy. Educators love to hear back from former students, which is especially true in this case since Jeremy spent a lot of time struggling to keep up with blogs and portfolios in my classes even though he is flipping brilliant. A short version of the comment would be something like this: keep doing what you are doing, it’s working. Or that’s my interpretation, anyway. But you should really read the comment for yourself, so click on it now.

I’ll leave you (for the moment) with a final thought: sometimes what we do as educators can feel trendy and hip and on the cutting edge, but that Edge will move away from us as new technologies and techniques arise. Deal with it. Take a close look at what is really working for your students and what isn’t. Wade through the shiny stuff to find that old, grimy core set of truths that got you into teaching in the first place, and see if you can carry those truths on to the next school year and the next.

-C

How much detail should a high school Anatomy student be expected to master? Is it the same level of detail as a college student taking a similar course? Why or why not?

I teach Anatomy and Physiology at the high school level and offer that course as concurrent credit with our local junior college. The current arrangement is that one semester of high school anatomy grants one semester of college credit (after earning a C or better). A student interested in going into nursing or medicine can leave my Anatomy course with 8 college credits and get some prerequisite courses out of the way.

I recently had a great talk with the anatomy instructor at the junior college about how we run our anatomy courses. I showed her the portfolio-based documentation that we use, and we also looked at some of the tests that I give.

My overall impression of the differences between the college and high school anatomy courses from this discussion can be summarized as follows:

  1. I spend a lot of course time having students design labs that measure various aspects of human physiology. This does not appear to happen as often at the college level.
  2. My evaluation system is primarily based on collecting evidence of performances of practices of science rather than a focus solely on content-area vocabulary and concepts. Grades at the college level seem to be solely determined by exams that test content-area vocabulary and concepts (including my own junior college course that I teach in the summer).
  3. The pace of the high school course is slower than that of the college course, so much so that some concepts that should be “covered” in the fall semester (according to the college syllabus) are not encountered until spring, and spring-semester units are very short compared to the college’s.
  4. The level of detail (vocabulary and concepts) that college students are expected to master far exceeds that seen in my high school classes. An example would be the muscular system unit for which I have students learn muscle physiology, but not the names and locations of most major muscle groups, as the college does. We do learn many muscle names and locations through the cat dissection later in the year, however.
  5. The college instructor reported that a large percentage of students drop the course in the first few weeks whereas my students generally stay in for an entire semester, if not an entire year.

TL;DR of the above conversation: High school anatomy class is being taught very differently than the college version, but for the same credit.

Is this a problem? If so, what are possible solutions?

Most likely the junior college, given the current focus on its accreditation review, would consider this a problem. Students not on their campus are being granted credit for a different set of work than those on campus.

But is different “bad” or undeserving of college credit in this case? Maybe. It depends upon what we are issuing college credit for.

What should the goal of an Anatomy and Physiology class be? We should award “credit” based on whether these goals have been met or not. I can think of at least a few possible underlying philosophies that we might apply as the stated aims of this course:

  1. Students learn about the structure and function of their own bodies so as to make healthy, informed choices both now and in future medical care for themselves and their families.
  2. Students practice lab design, data collection, and scientific argumentation in the field of human anatomy and physiology.
  3. Students gain a solid understanding and appreciation of medical concepts that will inspire them to pursue a career in the medical field.
  4. Students learn detailed medical terminology in order to pass future examinations such as the MCAT and nursing boards.

Right now I operate my class from a mashup of the first three, with a minor emphasis on the fourth. I am almost certain that the college course primarily follows the fourth philosophy.

How then do we reconcile the issuance of credit for these very different goals? Ultimately, the college holds the trump card in that they are the issuing authority of the credit. If they decide that high school students should take the same exact exams as the college students, then that level of detail will need to be taught and the pace of the course quickened, probably at the expense of lab experiences.

But should it? I’ll end with this pondering:

Would it be better to not offer concurrent credit for this anatomy course and continue to focus on goals 1-3 or should I move the concurrent credit course towards a faster-paced, more test-prep focus to match the college more closely?

Comments welcome, as always.

-C


I have sat through many professional development sessions where I’ve been bored out of my skull. There was the one about gloves and writing a five-paragraph essay that gained me nothing more than a single gardening glove with some Sharpie scribbles on it. There were several sessions where I walked away with a giant binder that I never opened again. After several of these useless meetings, one of my colleagues and I started using the acronym “JJT,” which of course stands for Job Justification Theatre.

There is a little JJT in every PD session, to be sure, but sometimes the PD really did justify the job that was being done, especially when the training happened to be useful for all the staff in the building. Maybe it helped impart a shared vision of expectations or served as a sounding board for staff concerns.  Sometimes we even (gasp!) created the PD ourselves and had staff members run it.

Our building’s approach to PD this year has been wildly different, however. The emphasis, perhaps in reaction to the excess of painful PD seat-time in previous years, has swung to individualized PD. Everyone gets what they need, as long as they find it themselves. Gone are the district-wide (insert edu-brand name here) trainings and other randomly generated PD topic sessions. This is the era of do-it-yourself PD.

Which is not a bad thing, unless we all travel our separate ways and never meet again.

And that’s why I’ve learned to love meetings. In meetings I get to chat with staff members from other departments to see what is working for them and what they are thinking about. Sometimes it’s like what I’m doing and we compare notes. Other times they are off on a radical new path that I need to know more about. But I wouldn’t have known about it by simply following my own interests.

I totally understand that the science department is going to have some different-looking PD from social studies and that those types of trainings will be department-specific to be really useful. I don’t necessarily want to learn about historical criticism.

But there is a place for creating and updating a vision for education that transcends individual topic area departments in a high school. How do we understand how kids learn and how can we adapt instruction accordingly? How do we provide quality feedback on student work? And my recent crusade: how do we assess and grade students in a way that is fair and doesn’t penalize them for the speed (or lack thereof) at which they learn?

These are topics that one person or department cannot simply address in a useful way without bouncing ideas off of the entire building. Believe me, I’ve been reforming grading practices in my classroom for years but those weird ideas have yet to gain much traction with many other faculty here. I’ve had more discussions with folks here on the blog and on Twitter than in my physical reality.

So please, take some time to meet with your faculty and brainstorm on topics that you all need to address as a staff. I’m going to make that my goal. It’s time to schedule some meetings.

 

Fanboy Ludwig with Neil Shubin at NSTA15

Ever have one of those “mountaintop” experiences or events that at the time feel so important and life-changing that you wonder if you’ll be the same person on the other side of it? And then did you come down off the mountain and get back in your proverbial or literal car and return to your regular life? I think we’ve all been there a few times, perhaps at summer camp, a mission trip, or the tent revival at the local church.

For me, the latest such mountaintop experience was getting to attend and present at the national NSTA meeting in Chicago this past March. Seeing my name in the program just a few pages away from Neil Shubin and Bill Nye practically qualified me for rock-star status, at least on paper. The conference was amazing, as you’d expect, and I had a wonderful time giving my talk. The folks that came to see me (the mile walk!) were amazing and included a lot of great Twitter friends who stuck around to chat and make connections afterward.

I left NSTA15 feeling like I was on the right track. I’d presented at a national conference and didn’t make a fool of myself. Many teachers seemed inspired by my ideas. Every talk I went to about the NGSS pointed toward the need for new ways to assess student performances of science, which is exactly what my portfolio-based assessment system provides. From all that I saw there, I was on the leading Edge of thinking about new ways to collect, analyze, and share meaningful data about what students know and can do.

I’m not sure what I expected to happen post-conference, but it basically didn’t play out as expected. I didn’t see the major leaders behind NGSS express any interest in portfolio assessments, nor did I hear any encouraging news on that front from Arne Duncan. My Twitter stream continued to be the flood of info that it used to be, but, aside from a few high quality interactions, it felt more stale and repetitive than usual. I was not really greeted as a local hero upon my return to my little town, save for a few close friends. In fact, I have yet to be invited to share my work with the district staff, most of whom know very little about what I do.

Now in all fairness, none of these things would ever realistically have happened. Much of the push for NGSS is linked to companies who want to sell us more tests, so change to new assessment types will be slow on that front. Twitter is a hot mess of the good, the bad, and the ugly even on a good day with amazing connections like I have. My local district was in the middle of incredible political turmoil with a witch hunt targeting the current (now former) superintendent, so a little side-show theater like mine would hardly draw an audience.

Bottom line, I came down off the mountain pretty hard. I did some consulting with a few folks who are trying out portfolios this coming school year (good luck y’all!) but mostly life went right back to normal. Or worse than normal, because I landed back in school during our post-Spring Break testing season, which felt even more onerous and depressing this year. It lasted forever and took instructional technologies out of the classroom for testing purposes. All the visions of classroom-based performance assessments died as I watched students suffer through lame computer-based tests for over a month of the school year. Ah, reality. Thou sucketh.

But as I turn my eyes to the new school year, I don’t plan on giving up on my ideas for replacing our current high-stakes tests, although large systems are hard to budge.  I’ve heard that being a pioneer is hard, lonely work, and there is some truth to that from what I’ve experienced. I can only hope that I’m scouting towards a future that benefits my students (and yours). Stay tuned and keep those ideas coming.


Think of something new and innovative that you are trying out in your classroom, school, or district.

Prove to me that it works.

Yep, I want you to stop reading this and think about some fancy new way that you have of educating and/or assessing students and tell me what evidence you have to prove that your new technique works.

Twice recently I’ve been faced with this demand. In the first instance, a teacher who was very excited about using portfolios after hearing my talk at NSTA15 in Chicago contacted me for help in convincing her science department to let her pilot the use of portfolios. She sent me a list of their questions that looked something like this:

1) Have you seen an increase/decrease on standardized test scores?

2) Have you seen an increase/decrease in student motivation?

3) Have you seen an increase/decrease in student competency?

A similar question popped up in the application packet for the PAEMST:

Provide evidence of your teaching effectiveness as measured by student achievement on school, district or state assessments, or other external indicators of student learning or achievement.

Here’s the problem: portfolio-based assessments like those that I employ are meant to be a replacement for standardized test scores. Portfolios are not just some labor-intensive test prep system. That would be like spending months training for a triathlon but instead finding yourself riding a mechanical bull for ten minutes. You could probably ride the bull a little better than if you hadn’t trained, but the bulk of your training would be lost on anyone watching you ride the mechanical bull (badly).

What then do you say to the science department questionnaire about the effectiveness of portfolios? What proof could I possibly provide about external indicators of student learning that could match the depth and quality of the portfolio assessments themselves? ACT data might be the closest thing to useful testing data that I see, but correlating achievement on ACT with pre- and post-portfolio implementation would be fraught with any number of the usual data snarls that we find when trying to compare different test takers from multiple school years.

We are then at an impasse. Those educators like myself that want to use portfolios for assessment will tout all the amazing things that you can observe in portfolios that you could not otherwise. Those who want to keep using standardized tests as the measuring stick for student and educator performances will decry the lack of a link between portfolios and achievement test scores.

I think that pretty soon we are going to have two different systems pop up across the country to accommodate these two assessment camps. One wing will be led by the testing juggernaut that stands to make a lot of money by continuing the current testing regime, but the other will be led by… Kentucky? New Hampshire? Your guess is as good as mine, but I suspect (hope?) that sooner or later we’ll see some states piloting portfolios (again) as much-needed replacements for the broken assessments that we currently use.

In the meantime, I hope that teachers like the one I mention above are allowed or even encouraged to try new ways of teaching and learning and that the burden of proof of effectiveness does not grind progress to a halt. New assessment systems require new systems of measurement. To expect more comprehensive forms of assessment such as portfolios to generate the same simple, supposedly comparable data as has been generated in the past is blatantly unfair to those willing to try something new.

 

 


This weekend at the NSTA national meeting in Chicago I’ll be hosting a discussion about the use of portfolios as the keystone of new NGSS-centered district and state science assessments. Here are the slides I’ll use to start the discussion:

Exemplar portfolios can be found here

Please join the discussion if you can make it to the conference or leave a comment here to continue the discussion online.


What if the next generation of teacher accountability systems simply relied upon assessment of student performances? You’re thinking: don’t we do that now? No, we don’t. In most cases, our current accountability systems built on standardized tests are supposed to measure student learning, which is not the same thing as assessment. Attempting to measure learning often leads to limiting ourselves to finding the best statistical models, crafting the best distractors, and determining cut-off scores. We should instead focus on finding ways to figure out what is happening in the classroom and how those learning activities engage students in performances of science and engineering. Isn’t that what taxpayers and parents really want to know: what’s going on in there?

I’m increasingly convinced that it is possible to assess and share a student’s performances of science and engineering without having to put a measurement (number/score/value) on that student’s work. It’s pretty simple, and even excellent educational practice, to tell a student how to fix their mistakes rather than simply writing “72%” at the top of their assignment. This kind of assessment without measurement should be happening routinely in classrooms. It’s also entirely possible to have this kind of assessment mindset when observing teachers for accountability purposes. Collections of student work, as in a portfolio, could be analyzed and areas of strength and weakness identified and shared with the teacher and, perhaps, the public.

Four years ago I started using digital portfolios to assess student learning as a way to hold myself and my students accountable to a set of science performance standards that I knew my students were not achieving. It is not an amazing stretch of the imagination to picture a system in which such portfolios of student work are examined by the representatives of a state Department of Education to assess how I’m performing as a teacher. Unfortunately, the recent tragic history of accountability practices nationwide would suggest that, at least politically speaking, if an assessment system doesn’t generate numerical measurements of students, no one wants to touch it.

But why does the idea that we can measure student learning burn so brightly in many Departments of Education?

To answer that, I think we have to look closely at what these so-called measurements of learning (state achievement tests) get us: they provide numbers that stand in for unquantifiable quantities, namely “knowledge” and “ability.” Some of the resulting numbers are bigger than others and thus provide a sense of easy comparisons between whatever the different numbers are attached to. Clearly, if I am buying a car, one that gets 40 mpg is superior to one that only gets 26 mpg. But is it fair or even appropriate to attach certain numbers to students, teachers, schools, school districts, or even states? What do numbers attached to a student even mean? Does a scale score of 400 on a state test mean that a student has learned less than one that earns 500?

Worse yet, what are those measurement comparisons used for? Let’s examine my least-favorite use of educational measurement data: the real estate market. We all know the real estate mantra: location, location, location. When you look for a new house these days you can quite easily access information about the quality of the neighborhood in which the house is located. Of course, school ratings are often thrust at potential buyers as a major indicator of the “right” neighborhood. Some of the newer realtor-oriented mobile apps sport new “schools” tabs that are clearly meant to add helpful data to your house-buying experience.

For science, let’s pretend to buy a home here in my town, La Junta, Colorado. In our case the community is composed of one neighborhood so all our school district data applies to the whole town. Here’s what we find out about my school district on some websites that you can easily find on your own (comments mine, but from a prospective buyer’s perspective):

School "rating"

Overall rating: 3 out of 10. Ouch. Better not buy a house here. These schools must suck.

Wait a minute… this school district was a 3 out of 10. These ACT test scores are right near state average, so shouldn’t the district rating be near a 5 out of 10? Maybe there’s more to it.


Hmmm, on second thought, maybe I don’t want to move here after all. Maybe this educational environment deserves a 3 out of 10 if these are the kind of people my kid would go to school with. Why else would a realtor show me these numbers?

In reality, a combination of “educational environment” (whatever that means) and state testing scores (CSAP/TCAP) is what brings our magic number down to 3/10. Sure, the realtor sites add the caveat that we should check with the individual school districts to look at multiple measures of success, but as a simple first look, a single measurement is sure easier to produce. And it’s misleading, wrong, and easily manipulated.

And that’s just how numbers are used in the real estate business. The business of education sometimes uses those numbers in far more harmful ways. Look at any recent headline with the words “standardized test” and you’ll probably see some of the fallout from decades of so-called measurement of learning.

I don’t have the magic bullet to fix the national obsession with comparing apples and oranges, but if I did, it would look a lot like a portfolio-based collection of student work that could demonstrate not only students’ effort and learning but also the care and planning that teachers invested to help create an environment in which their students can thrive. That’s the kind of accountability system that I can get behind.



This week I was asked by an administrator if I would like to go observe “some master Science teachers” in one of the big cities here in Colorado. I said yes. I’ll jump at any chance to see other science teachers in action, especially those that are in another school district.

But then I got to wondering about the phrasing of this offer, especially the bit about “master” teachers.

How does one earn the label of Master Teacher? Are these teachers self-identified experts at science teaching or is this a label granted by their administrators? Do their state science test scores blow my students’ scores away so that the state grants this title? What metric are we using here?

Of course, the obvious answer is that these may be National Board Certified folks. That seems to be the only metric that Colorado officially uses to determine if you are a master teacher. The NBCT site claims that “to date, 890 Colorado teachers have achieved National Board Certification.” I guess I find it kind of sad that out of all the teachers to ever teach in Colorado, only 890 of them are master teachers.

The subtext to the offer to visit another school is an interesting one, too. I teach in the only high school in a town of about 8000 people. The master teachers that I would be visiting work in schools in one of the big cities a few hours away. The folks at CDE who made us this offer clearly thought that teachers in the little school districts could benefit from seeing how it’s done in the cities. But is the teaching and learning that happens in big cities any more masterful than that happening out in the rural schools? Do we not have access to the same academic journals, blogs, and online networks of truly masterful teachers that they do? Shouldn’t they be visiting us instead?

I guess I am obsessing about titles and labels and the rural vs. urban socioeconomic dynamic here since I’ll be presenting at the National Science Teachers Association national meeting in Chicago in just a few weeks. I’ll attend sessions led by folks from the National Research Council and Achieve Inc. (the forces behind the NGSS) and surround myself with the high society of the nation’s science educators (and yes, some functions at the conference require “evening attire”).

What sort of labels matter when science educators get together? I for one am sorely tempted to only seek out presenters with the label “current teacher” in their bio, because these are the folks who are most obviously trying to do right by their students on a daily basis. Likewise, I strongly suspect that there will be conference attendees who will look for certain credentials or affiliations after my name in the session listing and find them lacking.

In summary, I guess I would have been happier if this offer of a visitation simply asked if we wanted to meet and observe some fellow teachers in another school district. I still would have said yes, but without wondering whether someone was trying to compare my teaching skills with theirs. Who knows, maybe I will get to meet these master teachers and judge for myself. Maybe someday they’ll meet me and do likewise, but I probably still won’t be a Master Teacher, just a darn good one.

Image source: http://juliapopstar.deviantart.com/art/Martin-Freeman-s-Stamp-of-Approval-BADASS-ED-363964666

