I have sat through many professional development sessions where I’ve been bored out of my skull. There was the one about gloves and writing a five-paragraph essay that gained me nothing more than a single gardening glove with some Sharpie scribbles on it. There were the several sessions where I walked away with a giant binder that I never opened again. After several of these useless meetings, a colleague and I started using the acronym “JJT,” which of course stands for Job Justification Theatre.

There is a little JJT in every PD session, to be sure, but sometimes the PD really did justify the job that was being done, especially when the training happened to be useful for all the staff in the building. Maybe it helped impart a shared vision of expectations or served as a sounding board for staff concerns.  Sometimes we even (gasp!) created the PD ourselves and had staff members run it.

Our building’s approach to PD this year has been wildly different, however. The emphasis, perhaps in reaction to the excessive, painful PD seat-time of previous years, has swung to individualized PD. Everyone gets what they need, as long as they find it themselves. Gone are the district-wide (insert edu-brand name here) trainings and other randomly generated PD topic sessions. This is the era of do-it-yourself PD.

Which is not a bad thing, unless we all travel our separate ways and never meet again.

And that’s why I’ve learned to love meetings. In meetings I get to chat with staff members from other departments to see what is working for them and what they are thinking about. Sometimes it’s like what I’m doing and we compare notes. Other times they are off on a radical new path that I need to know more about. But I wouldn’t have known about it by simply following my own interests.

I totally understand that the science department is going to have some different-looking PD from social studies and that those types of trainings will be department-specific to be really useful. I don’t necessarily want to learn about historical criticism.

But there is a place for creating and updating a vision for education that transcends individual topic area departments in a high school. How do we understand how kids learn and how can we adapt instruction accordingly? How do we provide quality feedback on student work? And my recent crusade: how do we assess and grade students in a way that is fair and doesn’t penalize them for the speed (or lack thereof) at which they learn?

These are topics that one person or department cannot simply address in a useful way without bouncing ideas off of the entire building. Believe me, I’ve been reforming grading practices in my classroom for years but those weird ideas have yet to gain much traction with many other faculty here. I’ve had more discussions with folks here on the blog and on Twitter than in my physical reality.

So please, take some time to meet with your faculty and brainstorm on topics that you all need to address as a staff. I’m going to make that my goal. It’s time to schedule some meetings.


Fanboy Ludwig with Neil Shubin at NSTA15


Ever have one of those “mountaintop” experiences or events that at the time feel so important and life-changing that you wonder if you’ll be the same person on the other side of it? And then did you come down off the mountain and get back in your proverbial or literal car and return to your regular life? I think we’ve all been there a few times, perhaps at summer camp, a mission trip, or the tent revival at the local church.

For me, the latest such mountaintop experience was getting to attend and present at the national NSTA meeting in Chicago this past March. Seeing my name in the program just a few pages away from Neil Shubin and Bill Nye practically qualified me for rock-star status, at least on paper. The conference was amazing, as you’d expect, and I had a wonderful time giving my talk. The folks that came to see me (the mile walk!) were amazing and included a lot of great Twitter friends who stuck around to chat and make connections afterward.

I left NSTA15 feeling like I was on the right track. I’d presented at a national conference and didn’t make a fool of myself. Many teachers seemed inspired by my ideas. Every talk I went to about the NGSS pointed towards needing new ways to assess student performances of science, which my portfolio-based assessment system clearly does. From all that I saw there, I was on the leading edge of thinking about new ways to collect, analyze, and share meaningful data about what students know and can do.

I’m not sure what I expected to happen post-conference, but things basically didn’t play out that way. I didn’t see the major leaders behind NGSS express any interest in portfolio assessments, nor did I hear any encouraging news on that front from Arne Duncan. My Twitter stream continued to be the flood of info that it used to be, but, aside from a few high quality interactions, it felt more stale and repetitive than usual. I was not really greeted as a local hero upon my return to my little town, save for a few close friends. In fact, I have yet to be invited to share my work with the district staff, most of whom know very little about what I do.

Now in all fairness, none of these things would ever realistically have happened. Much of the push for NGSS is linked to companies who want to sell us more tests, so change to new assessment types will be slow on that front. Twitter is a hot mess of the good, the bad, and the ugly even on a good day with amazing connections like I have. My local district was in the middle of incredible political turmoil with a witch hunt targeting the current (now former) superintendent, so a little side-show theater like mine would hardly draw an audience.

Bottom line, I came down off the mountain pretty hard. I did some consulting with a few folks who are trying out portfolios this coming school year (good luck y’all!) but mostly life went right back to normal. Or worse than normal, because I landed back in school during our post-Spring Break testing season, which felt even more onerous and depressing this year. It lasted forever and took instructional technologies out of the classroom for testing purposes. All the visions of classroom-based performance assessments died as I watched students suffer through lame computer-based tests for over a month of the school year. Ah, reality. Thou sucketh.

But as I turn my eyes to the new school year, I don’t plan on giving up on my ideas for replacing our current high-stakes tests, although large systems are hard to budge.  I’ve heard that being a pioneer is hard, lonely work, and there is some truth to that from what I’ve experienced. I can only hope that I’m scouting towards a future that benefits my students (and yours). Stay tuned and keep those ideas coming.


Think of something new and innovative that you are trying out in your classroom, school, or district.

Prove to me that it works.

Yep, I want you to stop reading this and think about some fancy new way that you have of educating and/or assessing students and tell me what evidence you have to prove that your new technique works.

Twice recently I’ve been faced with this demand. In the first instance, a teacher who was very excited about using portfolios after hearing my talk at NSTA15 in Chicago contacted me for help in convincing her science department to let her pilot the use of portfolios. She sent me a list of their questions that looked something like this:

1) Have you seen an increase/decrease on standardized test scores?

2) Have you seen an increase/decrease in student motivation?

3) Have you seen an increase/decrease in student competency?

A similar question popped up in the application packet for the PAEMST (the Presidential Awards for Excellence in Mathematics and Science Teaching):

Provide evidence of your teaching effectiveness as measured by student achievement on school, district or state assessments, or other external indicators of student learning or achievement.

Here’s the problem: portfolio-based assessments like those that I employ are meant to be a replacement for standardized test scores. Portfolios are not just some labor-intensive test prep system. That would be like spending months training for a triathlon but instead finding yourself riding a mechanical bull for ten minutes. You could probably ride the bull a little better than if you hadn’t trained, but the bulk of your training would be lost on anyone watching you ride the mechanical bull (badly).

What then do you say to the science department questionnaire about the effectiveness of portfolios? What proof could I possibly provide about external indicators of student learning that could match the depth and quality of the portfolio assessments themselves? ACT data might be the closest thing to useful testing data that I see, but correlating achievement on ACT with pre- and post-portfolio implementation would be fraught with any number of the usual data snarls that we find when trying to compare different test takers from multiple school years.

We are then at an impasse. Those educators like myself that want to use portfolios for assessment will tout all the amazing things that you can observe in portfolios that you could not otherwise. Those who want to keep using standardized tests as the measuring stick for student and educator performances will decry the lack of a link between portfolios and achievement test scores.

I think that pretty soon we are going to have two different systems pop up across the country to accommodate these two assessment camps. One wing will be led by the testing juggernaut that stands to make a lot of money by continuing the current testing regime, but the other will be led by… Kentucky? New Hampshire? Your guess is as good as mine, but I suspect (hope?) that sooner or later we’ll see some states piloting portfolios (again) as much needed replacements for the broken assessments that we currently use.

In the meantime, I hope that teachers like the one I mention above are allowed or even encouraged to try new ways of teaching and learning and that the burden of proof of effectiveness does not grind progress to a halt. New assessment systems require new systems of measurement. To expect more comprehensive forms of assessment such as portfolios to generate the same simple, supposedly comparable data as has been generated in the past is blatantly unfair to those willing to try something new.




This weekend at the NSTA national meeting in Chicago I’ll be hosting a discussion about the use of portfolios as the keystone of new NGSS-centered district and state science assessments. Here are the slides I’ll use to start the discussion:

Exemplar portfolios can be found here

Please join the discussion if you can make it to the conference or leave a comment here to continue the discussion online.


What if the next generation of teacher accountability systems simply relied upon assessment of student performances?  You’re thinking: don’t we do that now? No, we don’t. In most cases, our current accountability systems of standardized tests are supposed to measure student learning, which is not the same as assessment.  Attempting to measure learning often leads to limiting ourselves to finding the best statistical models, crafting the best distractors, and determining cut-off scores. We should instead focus on finding ways to figure out what is happening in the classroom and how those learning activities engage students in performances of science and engineering. Isn’t that what taxpayers and parents really want to know: what’s going on in there?

I’m increasingly convinced that it is possible to assess and share a student’s performances of science and engineering without having to put a measurement (number/score/value) on that student’s work. It’s pretty simple, and even excellent educational practice, to tell a student how to fix their mistakes rather than simply writing “72%” at the top of their assignment. This kind of assessment without measurement should be happening routinely in classrooms. It’s also entirely possible to have this kind of assessment mindset when observing teachers for accountability purposes. Collections of student work, as in a portfolio, could be analyzed and areas of strengths and weaknesses identified and shared with the teacher and, perhaps, the public.

Four years ago I started using digital portfolios to assess student learning as a way to hold myself and my students accountable to a set of science performance standards that I knew my students were not achieving. It is not an amazing stretch of the imagination to picture a system in which such portfolios of student work are examined by the representatives of a state Department of Education to assess how I’m performing as a teacher. Unfortunately, the recent tragic history of accountability practices nationwide would suggest that, at least politically speaking, if an assessment system doesn’t generate numerical measurements of students, no one wants to touch it.

But why does the idea that we can measure student learning burn so brightly in many Departments of Education?

To answer that, I think we have to look closely at what these so-called measurements of learning (state achievement tests) get us: they provide numbers that stand in for unquantifiable quantities, namely “knowledge” and “ability.” Some of the resulting numbers are bigger than others and thus provide a sense of easy comparisons between whatever the different numbers are attached to. Clearly, if I am buying a car, one that gets 40 mpg is superior to one that only gets 26 mpg. But is it fair or even appropriate to attach certain numbers to students, teachers, schools, school districts, or even states? What do numbers attached to a student even mean? Does a scale score of 400 on a state test mean that a student has learned less than one that earns 500?

Worse yet, what are those measurement comparisons used for? Let’s examine my least-favorite use of educational measurement data: the real-estate market. We all know the real estate mantra: location, location, location. When you look for a new house these days you can quite easily access information about the quality of the neighborhood in which the house is located. Of course, school ratings are often thrust at potential buyers as a major indicator of the “right” neighborhood. Some of the newer realtor-oriented mobile apps sport new “schools” tabs that are clearly meant to add helpful data to your house-buying experience.

For science, let’s pretend to buy a home here in my town, La Junta, Colorado. In our case the community is composed of one neighborhood so all our school district data applies to the whole town. Here’s what we find out about my school district on some websites that you can easily find on your own (comments mine, but from a prospective buyer’s perspective):

School "rating"

Overall rating: 3 out of 10. Ouch. Better not buy a house here. These schools must suck.

[screenshot: district rating shown alongside ACT scores near the state average]

Wait a minute…this school district was a 3 out of 10, but these ACT test scores are right near state average, so shouldn’t the district rating be near a 5 out of 10? Maybe there’s more to it.

[screenshots: realtor-site demographic data for the district]

Hmmm, on second thought, maybe I don’t want to move here after all. Maybe this educational environment deserves a 3 out of 10 if these are the kind of people my kid would go to school with. Why else would a realtor show me these numbers?

In reality, a combination of “educational environment” (whatever that means) and state testing scores (CSAP/TCAP) is what brings our magic number down to 3/10. Sure, the realtor sites add the caveat that we should check with the individual school districts to look at multiple measures of success, but as a simple, first look, a single measurement is sure easier to produce. And it’s misleading, wrong, and easily manipulated.

And that’s just how numbers are used in the real estate business. The business of education sometimes uses those numbers in far more harmful ways. Look at any recent headline with the words “standardized test” and you’ll probably see some of the fallout from decades of so-called measurement of learning.

I don’t have the magic bullet to fix the national obsession with comparing apples and oranges, but if I did, it would look a lot like a portfolio-based collection of student work that could demonstrate not only students’ effort and learning but also the care and planning that teachers invested to help create an environment in which their students can thrive. That’s the kind of accountability system that I can get behind.



This week I was asked by an administrator if I would like to go observe “some master Science teachers” in one of the big cities here in Colorado. I said yes. I’ll jump at any chance to see other science teachers in action, especially those that are in another school district.

But then I got to wondering about the phrasing of this offer, especially the bit about “master” teachers.

How does one earn the label of Master Teacher? Are these teachers self-identified experts at science teaching or is this a label granted by their administrators? Do their state science test scores blow my students’ scores away so that the state grants this title? What metric are we using here?

Of course, the obvious answer is that these may be National Board Certified folks. That seems to be the only metric that Colorado officially uses to determine if you are a master teacher. The NBCT site claims that “to date, 890 Colorado teachers have achieved National Board Certification.” I guess I find it kind of sad that out of all the teachers to ever teach in Colorado, only 890 of them are master teachers.

The subtext to the offer to visit another school is an interesting one, too. I teach in the only high school in a town of about 8000 people. The master teachers that I would be visiting work in schools in one of the big cities a few hours away. The folks at CDE who made us this offer clearly thought that teachers in the little school districts could benefit from seeing how it’s done in the cities. But is the teaching and learning that happens in big cities any more masterful than that happening out in the rural schools? Do we not have access to the same academic journals, blogs, and online networks of truly masterful teachers that they do? Shouldn’t they be visiting us instead?

I guess I am obsessing about titles and labels and the rural vs. urban socioeconomic dynamic here since I’ll be presenting at the National Science Teachers Association national meeting in Chicago in just a few weeks. I’ll attend sessions led by folks on the National Research Council and Achieve Inc. (the forces behind the NGSS) and surround myself with the high society of the nation’s science educators (and yes, some functions at the conference require “evening attire”).

What sort of labels matter when science educators get together? I for one am sorely tempted to only seek out presenters with the label “current teacher” in their bio, because these are the folks who are most obviously trying to do right by their students on a daily basis. Likewise, I strongly suspect that there will be conference attendees who will look for certain credentials or affiliations after my name in the session listing and find them lacking.

In summary, I guess I would have been happier if this offer of a visit had simply asked whether we wanted to meet and observe some fellow teachers in another school district. I still would have said yes, but without wondering whether someone was trying to compare my teaching skills with theirs. Who knows, maybe I will get to meet these master teachers and judge for myself. Maybe someday they’ll meet me and do likewise, but I probably still won’t be a Master Teacher, just a darn good one.

Image source: http://juliapopstar.deviantart.com/art/Martin-Freeman-s-Stamp-of-Approval-BADASS-ED-363964666


Previously in this space I wondered about my sanity (and my plans) in continuing to allow students to more or less run their physics class as an open workshop or maker-space. As it turns out, I did indeed decide to continue the student-designed format for this class for two main reasons. First of all, this year the physics class got scheduled for 7th hour, which is at the end of the day when students are at their most brain-dead and need to be up and moving around. The second and perhaps more important reason is that I knew most of the students coming in to the class in August, and by looking at the roster I could guess that a traditional math-based physics curriculum was going to flop. That’s no slam on these kids; they’ve got lots of talent, but I recognized that trying to do a more traditional physics prep at the end of the school day with this particular group of students would be a waste of their time.

Fortunately for all concerned, the choice to let students use their time in my class as they see fit has paid off quite well so far this year. The Phun6 students formed teams and have pursued several projects of their own choosing. However, this group has been more private in their sharing of their projects compared to the last two Phunsics classes. Instead of creating a public-facing blog, they chose to make a private Facebook Phun6 group where they’ve been updating each other and myself as to what they have accomplished.

Since there’s a bit of mystery surrounding this year’s Phun6 team, I thought I’d use this post to bring you up to speed on some of the projects that they’ve tackled this year so far:

-We have resurrected and refurbished the potato cannon from Phunsics 2011:


-We have created a pendulum wave machine:


-We created some conductive and nonconductive squishy circuit dough and built circuits:


-We built model rockets and tried out a two-stage design:


-We built V2.0 of the Phunsics quadcopter from scratch, got all 4 motors to spin up, and even got it to hop on a short flight before we killed one of the motors and most of the propeller blades:


-Several Phunsics alumni from School of Mines stopped by to check out the quadcopter and lend a hand in determining our next steps to get it flying:


-We built and launched a rocket-powered car (prototype shown):

-We constructed a Reuben’s Tube for visualizing music and sound waves with fire:


Here’s a Vine of some of the pitches tested.

-We modified the flame tube setup to run two tubes simultaneously, one for bass and one for treble. While very impressive, the bass tube pressures kept blowing out our flames:


-We are nearly finished building an air-powered marshmallow gun:


-We produced a video of the “12 Days of Physics” documenting our adventures (and mishaps) during the class so far:


We might publish the video to YouTube when we get back to school in January or maybe we’ll decide to spare you the exposure to our out-of-tune guitars and amateur singing.

Looking ahead to next semester, one of the groups has plans to help a wheelchair-bound friend of ours by designing an attachment that will make it easier to connect a regular wheelchair to a power-assist Firefly unit. That project is just getting started, but has the potential to be really useful to someone and might even lead to some limited commercialization of the product if we do a decent enough job at it.

Thanks for reading, and stay tuned for what they manage to dream up next semester!



Free speech. Freedom of religion. Freedom to bear arms. Free access to your school’s WiFi network. We hold these truths to be self-evident.


Until the tech department changes the passwords, that is.

At my school, students had grown used to a very generous Bring Your Own Device atmosphere that had built up over several years. I suppose most students had their phones on the school network, and I was starting to see a sprinkling of individually-owned iPad minis, other tablets, and the occasional PC laptop appear in class. This was accomplished by having a Guest network available through the school; most if not all students had the username and password that the technology department had freely circulated for their use.

But how were they using this access? According to a recent conversation with our tech department folks, the vast majority of traffic on the school-provided WiFi was to YouTube and Facebook. The assumption, and probably an accurate one, is that most of the bandwidth being slurped up by the BYOD crowd was for non-academic purposes. So the tech department decided to do something about it. Their first step was to change the Guest network login username and password and to not give it out to students.

But those crazy kids knew a couple of the other WiFi network passwords too, either through divine intervention or the fact that they were friends with the student tech interns over the past few years. The technology staff report walking into classrooms and seeing some of the not-so-secure network passwords scribbled on teacher whiteboards. See where this is going? If you are a network admin, you do.

If you are a network admin or keep an eye on such things, you know that IPv4 addresses consist of four numbers, and on a setup like ours the third number identifies the subnet (11) and the fourth the individual device (3). The not-so-arcane reason each subnet can dish out only so many addresses is that the last number is a single byte: 256 possible values, two of which are reserved for the network and broadcast addresses. This limits the number of devices that can be on one subnet to 254, and usually fewer in practice.
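Python’s standard `ipaddress` module can illustrate the arithmetic. The network address below is a made-up example chosen to match the subnet-11 pattern, not our school’s actual network:

```python
import ipaddress

# A typical WiFi subnet uses a /24 mask: the last of the four numbers
# (one byte, 8 bits) identifies individual devices on that subnet.
subnet = ipaddress.ip_network("192.168.11.0/24")  # hypothetical example network

print(subnet.num_addresses)        # 256 raw addresses (2**8)

# Two addresses are reserved: the network address (.0) and broadcast (.255).
usable = subnet.num_addresses - 2
print(usable)                      # 254 addresses left for actual devices
```

So once phones, tablets, laptops, and the school’s own machines all pile onto the same subnet, those 254 slots disappear fast.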

Now when all of our BYOD devices, which meant pretty much every cell phone in the building, hopped on the same WiFi network, what do you suppose happened? That’s right: we ran out of addresses. This time it was personal, because the network everyone had hopped onto was the one all my Macs and iPads were on, since Apple features like AirPlay and Bonjour love to be on the same subnet. But one subnet means only 250 or so devices, and now every kid in the building was snagging those IP addresses. Major network crash, right in the middle of some test prep that my students were trying to do on the Macs. Not pretty.

Our response? Change all the passwords. Quite logical, actually. Now only school-owned devices can connect to the school’s WiFi network. There seem to be no more connection problems and the speed of the network seems faster, but that could be my imagination.

Was this the right call, kicking every BYOD off of the school network? I’m not sure.

I totally understand why it happened the way it did and I get the argument about network connectivity as a limited resource. But if your students are like mine, and like the individual who drew the image above, access to WiFi ranks way up there on the list of basic needs. Lots of the YouTube traffic that I saw from my students was happening in the background as music that played while they worked on school-related stuff. Many teachers in the building report multiple instances of cell phones being used on a routine basis for academic purposes. Is it fair to now force students to use up their data plans for learning activities while school-provided WiFi lurks just out of reach? Is the local coffee shop now a more welcoming place to learn because they provide WiFi?

I think there are some positive aspects of BYOD, but right now we’re clunking around our implementation of it. How’s it work in your school? What solutions have you seen work? I’d love to hear your thoughts.


My wondering for the week is this: should I start grading students on their assessment portfolios from the very beginning of the year rather than wait for the 1st quarter marking period? But if assessment by portfolio starts from day one, is it fair to enter an F grade for everyone at the beginning of the year because their portfolio would be empty? Since I strongly suspect that it is not fair to grade an empty portfolio for the first few weeks of school, when is a good time to switch from purely formative assessment of blogs to the more summative assessment of the portfolio?

Currently my students start off the year with a basic technology boot camp and the establishment of their own individual blogs. We spend a good chunk of the first few weeks learning to blog (most haven’t before) and getting used to the new normal that is the blended mashup of learning that is my classroom. I give a speech or two about how we don’t use numerical points towards earning letter grades, but instead will provide evidence of our learning in other ways.

At some point, usually around four weeks into the school year, students finally create their Google Sites assessment portfolio from the template that I’ve given them. They share the portfolio address with me, but that’s usually all that happens with the portfolio for several weeks.

But as the end of the quarter approaches, there is a need to begin to fill the portfolio with artifacts of learning, as that is the assessment tool by which quarter (and semester) grades will be determined. In theory, the portfolio should not be a lot of extra work for students because it involves very little new writing and creating, simply sorting and linking assignments and evidence that have already been completed. Therefore this task should be greeted with joy and happiness.

Hmmm. What I see instead is that a small minority of students grab onto the portfolio concept early on and fill it up as they go along through the class: blog post gets published, blog post gets sorted into the portfolio. But the other 80-90% of students do not touch it. Call it avoidance of failure, call it unfamiliarity, maybe throw in some technophobia, and portfolio building does not happen spontaneously for most students until the portfolio becomes the basis for the course grade.


Now, keep in mind that students have been getting an “eligibility” grade from me from the first weeks of school, so being given a letter grade is nothing new for my classes. At the beginning of the year I tend to grade in a pass/fail manner, as I have not yet gathered enough information from just a few assignments to really paint an overall picture of a student’s performance. After a time, probably by the 3rd or 4th week of school, I do start guessing at letter grades besides P and F based on the quality and quantity of work that I am seeing published to their blog.

A more accurate letter grade doesn’t get assigned, however, until I feel that I have provided students with enough chances to be successful in each of the major course standards, and that may not occur until right before the 1st quarter grade (or if we are talking AP Biology, until AFTER the 1st quarter is over). At that point I begin to start looking at what students are putting into their portfolios.

But now we have a perfect storm of factors come together: grades are due in a week or two, many students are behind in their blogging, most students have not bothered to figure out how to operate Google Sites because there has been no need to until now, the portfolio is empty, and I feel the need to (finally) show students what their grades will be like once I apply the published guidelines for assessing the portfolio for midterm and final grades. Ready or not, it’s portfolio time.

So I devoted around a week of class time for students to work on very clearly specified pages of the portfolio and provided what I thought was very specific and often one-on-one instruction on how to post links to the portfolio in Google Sites. But when I went to grade the portfolios last weekend, a week before grades were due, many were still very incomplete or even empty of evidence.

To say that by grading these empty portfolios I filled up the entire eligibility list would be incorrect, but not far from it. I gave a lot of F’s to a lot of good students.

Then, and only then, with a failing grade in hand, did students come to me for help in upgrading their portfolios. It was a very busy but incredibly productive week after the portfolio-based grades were published.

Now back to my original question: when during the first quarter should I make the transition to summative grading based on the portfolio? Is this the only way to do it, with a week of panic right before the end of the quarter? Should I try picking up a grade from the portfolios somewhere in the middle of the quarter even though many of the lab standards might only have a lab or two as possible proof? How about at the beginning of the quarter even though there have been no chances to publish any work? Yikes. Try explaining initial grades of F to parents and coaches. But I bet I’d see students understand the portfolio concept better if we were using it from Day 1.

I suppose I might try evaluating the portfolio (U, PP, P, A ratings per standard) from the first build onwards without grading the portfolio (A, B, C, etc.). I might at least get a few more students interested in working on the portfolio as we go, since parents can see those evaluation ratings on Infinite Campus. The eligibility grade could still be pass/fail for a while at the beginning of the year and then move into a real letter grade based on the portfolio once students have had enough chances to fill it. There might be some questions towards the beginning of the year about how a kid is passing but has all U’s, but that is probably easier to deal with than failing the kid for work they haven’t even been assigned yet.

TL;DR: Kids will procrastinate until a grade is assigned. Start portfolio-based assessment earlier so that students have a better chance of being successful on the first midterm grade.


This is a cautionary tale about what happens when educational technology fails. Of course, tech breaking down is nothing new, but reliance on technology in a 1:1 learning environment introduces some complications that you might not have thought about.

Not so long ago, I would have ranked myself up there on the list of folks who knew how to do educational technology pretty well. I had managed to score a cart of MacBooks so that each student could be guaranteed 1:1 access during class time at least, and I went about implementing some fancy new strategies like student blogging. I even got tagged to write an article in Edutopia about it in 2010, so I was doing something at least marginally interesting with technology at the time.

A year later I was able to convince my tech coordinator and principal that I could really use a class set of iPads, since my hardcopy Anatomy textbooks were falling apart and new apps were appearing that would let my students get their textbooks electronically. When the district bought some new iPad 2s, I snagged a class set and went on to check them out to students in my own mini version of a 1:1 program for about 40 students at a time. We used the Inkling app to buy 30 copies of Hole’s Anatomy on the iPads and shelved the old textbooks, definitely a win in most #edtech circles.

Fast forward to this school year, 2014-15, in which I’ll need to use the same laptops that have been in my room since 2008. Add in the fact that our technology coordinator spent the summer closing up shop as he left for a new (less stressful and well-deserved) job, so no new technology hardware purchases were made and no older units were repaired. This means that several Macs I sent in for a missing key or sticky trackpad are now completely AWOL, as are several iPads that needed minor repairs, as are multiple batteries from the Macs (the removable variety), since I pulled several for disposal at the beginning of the summer.

I should also mention that I have 46 Anatomy students but only 38 working iPads and only 30 Inkling textbook licenses, so the tradition of loaning an iPad to every Anatomy student ended this year.

This is when I realize how spoiled I’ve been. I have always been able to get all the technology that I felt that students needed to learn in a “modern” classroom. But now that a lot of that technology is powerless (literally) to help my students, what’s a tech-nerd to do without technology? How does a very functional 1:1 implementation carry on when it is no longer 1:1?

We’re going old-school, of course. My juniors and seniors in the Anatomy class are coloring. ON PAPER (using study guide packets, and, yes, I see some irony in that after slamming packets in a previous post). We’re using a hardcopy textbook again. It’s from 2004, and most copies are falling apart in some way.

But here’s the fun part: I think coloring diagrams of the human body has a place in an anatomy course, and I forgot that in my quest for the latest gadgets. I think students poring over a list of terms and deciding their locations in a particular type of tissue, organ, or system has a lot of merit as a learning tool. It’s not that the iPad can’t do that, but I honestly rarely saw my students using the iPads that way. If you give a student a handout to help them learn about the human body, there is pretty much only one use for that handout, but if you give them an iPad, it gets a little more complicated. If you were a student with an iPad, would you choose to read an anatomy text on it instead of using one of the other thousands of apps it could run? Maybe, if the teacher forced you to, but I never did think that forcing kids into certain apps was the best use of iPads, which meant that our fancy Inkling textbooks went largely unused, I’m sorry to say.

This year represents a chance to take a much lower-tech approach to teaching Anatomy, a return to how I used to teach it in some ways. Oh, I’ll still use technology for the class. In fact, we’ve already got our blogs set up and will eventually set up our assessment portfolios online too, as we’ve done for the past few years. The only difference might be in the kinds of artifacts we post there. Expect to see some more coloring, and, who knows, maybe some better test scores as well.


