In Classroom of Future, Stagnant Scores. A.k.a., tech apparently not living up to its promise (i.e., higher test scores)
First of all, I am glad that the New York Times is taking such a vested interest in writing about the state of American education and the ills and occasional triumphs it experiences. The paper has a good track record in this department at a time when the American media is generally vilifying the practitioners (teachers) in a fairly dysfunctional system, and it usually avoids such political bombast. In the article "In Classroom of Future, Stagnant Scores" by Matt Richtel, however, I felt that, while attempting to provide a broad sweep of technology's effect in the classroom, the occasional oversimplification, the reliance on anecdotal evidence, and a general disregard for scientific evidence make the article read as a bit of an agenda piece (at a time when that is best avoided).
So rather than bemoan the darkness, I will attempt (respectfully) to extract a few of the sections that rang a bit false.
Anecdotal evidence, or filtering the macro through the micro experience
The Kyrene School District is pointed to as an example of a school system that has attempted to use technology more readily in the classroom (and presumably outside it as well). The article dismisses these attempts, saying the district's test scores have not improved despite a general upward trend in test scores statewide. This is a valid approach to determining the effectiveness of including technology in the classroom, except that the data this effectiveness is based on is limited to 2006-2007; 2005, presumably, would not produce a credible sample, as the technology was still being incorporated in that school year. It wouldn't be a clean year of data.
So, based solely on 2006-2007, there is little indication that technology has improved test scores. This doesn't account for a whole mess of variables, including teacher training (why would we assume the teachers know how to teach with this stuff?), the level of familiarity the student has with technology (acumen with technology = extended contact time with technology), and the willingness of the district to revise curriculum to suit the pedagogical requirements and affordances of this new learning capacity (learning technology). Unfortunately, we keep trying to transport the traditional classroom-based instructional process wholesale onto elearning, or even simply technology-enhanced, learning environments. They aren't the same; sufficient evidence exists that the author could have pointed to. If higher education is struggling to incorporate technology into the learning experience, why would a poorer elementary school district in Arizona find it easier?
Most important, however, is the role of anecdotal evidence in journalism. It is misleading here not because it establishes a context and an emotional core for the story, but because it does little to nothing to advance the thesis that the connection between technology and improved learning is murky. A broader discussion of all the schools that received technology as a result of the grant, compared with districts that didn't, might have been more effective.
What are we measuring exactly?
I don’t want to go off on a diatribe about standardized test scores, as some parts of them are necessary. Accountability, for example. But buying a pencil doesn’t improve a test score, and technology doesn’t inherently lead to improved test scores. If that is the ultimate goal, then by all means pursue it; it just happens to be a dead end. I support standards, but what are we measuring? Literacy (and what kind? digital?), mathematical aptitude (a good thing), citizenship (why would a multiple-choice test lend itself to patriotism?), writing and communication skills (better served online), phonics? For many of those, don’t bother with technology, as there will be very little relation. It’s like saying an elevator in the school would improve physical education scores. Tools don’t inherently lead to acumen, but there is little to no possibility of developing acumen without the tools. Think of these tools like appendages: imagine how difficult it would be without them, even if you can’t play the piano.
Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
Really? No better way at all? Why not broaden the standardized tests (in lieu of scrapping them) to include “the breadth of skills that computers can help develop”? Why not transition to a portfolio-based notion of knowledge construction, where ongoing reflection, learning, and even assessed activities become the subject of these state reviews? Think of it like a blog, but one you keep throughout your educational existence, from K-12. At the very least, this type of collected reflection would be a counterpoint to a standardized test. (And Americans, you need to know, and I am American, that we are test crazy. We need to loosen that a little, at least until we find the variables worthy of being measured in this new learning environment.) Before you disregard these notions, I urge you to run through some of the sample questions Alaska uses in grades 3-10. Ask yourself the following questions:
- Would I want my child’s educational future to be dictated by these tests?
- Would I feel comfortable, if I were a student, in having my educational potential be determined by taking this test?
- If I scored miserably on these tests (due not to competence, but to anxiety or any number of factors), would this stunt my potential future (academic, professional, or otherwise)?
So, the tests are fine; by all means, test what you want. But stop pretending that throwing a computer into a classroom is going to improve phonics scores, or multiplication tables, or even reading comprehension. It can assist in improving all three of these things with skillful facilitation and instruction, but not in and of itself. Long story short, throwing technology at a problem does not do anything. Technology needs to be introduced through participatory design, informed by pattern language, and gauged accordingly. And seriously, a two-year window of evaluation? A little unreasonable, no?
The lack of scientific evidence
A review by the Education Department in 2009 of research on online courses — which more than one million K-12 students are taking — found that few rigorous studies had been done and that policy makers “lack scientific evidence” of their effectiveness. A division of the Education Department that rates classroom curriculums has found that much educational software is not an improvement over textbooks.
This is, again, an apples-and-oranges comparison. Software and online courses are tools, and tools without skillful facilitation do not go very far; see the elevator and physical education analogy again. The “few rigorous studies” that have been done, I am guessing, have been specific enough to a locale that they might not scale well to the state or national level. At the end of the day, I would not necessarily agree with the author that the results are inconclusive; I would argue instead that a redistribution of funds toward teacher training and away from technology acquisition might be in order. Even a 50/50 split would help. Otherwise, the machines are like the neighbor’s Camaro on the front lawn: high-priced junk that smacks of failure. I would also caution the author against pointing the finger at online courses. How many rigorous studies have pursued the effectiveness of face-to-face instruction as the optimal mode of instruction? In comparison to what? I think there is a degree of legacy being conflated with legitimacy.
I suspect that the author is building a bit on the recent Pew research study on the perceived value of online learning, in which a slim majority of college presidents thought online learning provides value equal to face-to-face instruction, as opposed to only 29% of the general public. Pew did a good job with this study, but generally I am dismayed by the desire of (some) Americans to give opinions on subjects they have yet to experience. It is like being interviewed about what you thought of a book before you read it. The vast, vast majority of Americans have never taken an online course, yet feel compelled to provide an opinion on it. Where is the comparison? Granted, the Pew study deals specifically with US higher education, but one could suspect that these confidence percentages would continue to skew negatively as one approached elementary schools. So, in short, there is evidence; just not the silver bullet that says online learning is as good as or better than face-to-face instruction. There should never be a study that says that, as they are not the same thing. Either way, I dare you to take an online course (a full one) and walk away thinking it is inferior.
Which leads me to my next point.
More efficient delivery device
This is where the Kyrene School District example actually works. It gobbles up students because it is perceived as being innovative (i.e., basic economics), and this is not a bad thing for Arizona. In fact, one could argue that this is exactly what was envisioned when the program was first proposed. The physical, face-to-face delivery system of education is a drastically costly exercise, and one that has obviously been producing less than desired results. We know that an online program is just as costly as a face-to-face program (or at least in the ballpark of being equivalent); however, there is little to suggest that this would still hold true if implemented at scale. Labor costs are still the highest cost for most districts, and that won’t change with an online model. What will change is the need for physical overhead: structures, buildings, electricity, etc. These, while not being removed altogether, can be consolidated. Districts can and should shrink their physical space and transform the physical classroom into either a reflective space or a space for collaborating on group projects (for portfolios, not standardized tests).
However, this is the realist in me talking: it ultimately won’t matter whether online education is, or is perceived as, an inferior mode of education. It is simply a more cost-efficient (at scale) delivery device for education. Trains are nicer; planes are faster. Online education is generally quicker to pivot and less overhead-intensive (after that initial and highly costly investment). Decentralizing education by coupling online instruction with face-to-face instruction, while a bumpy process for sure, actually retools students for the skills that might be the dominant factor in the American economy for many years to come: namely, that everyone is a freelancer or a company of one. Only with that mindset will assertiveness in one’s own learning begin to reveal itself, and that is the underlying engine of this transformation in society. The safety of large structures (corporations) is suspect; safety lies in aggressively pursuing one’s own learning. Starting that early and often is the retooling we need in education, but it isn’t immediately quantifiable. There are differences between what we need as a society and what we ask for, and educational leadership needs to know that and respond accordingly (and often with a lack of popularity). Perception holds that online education is inferior? That isn’t a death sentence; it’s a time for reasonable, valid persuasion. But do it quickly: the states are running out of money, so the remorseless sweep of economic realities might make this decision for us.
In short, technology is a tool, a necessary one, but this is still a human enterprise. Train the teachers, empower the learners, and let technology augment that impact. This is where America can reinvent itself. Right here. Just not like this. And data is great, as long as it is pulled from the last four years; stopping at 2007 and calling it a trend seems misleading.