Data analysis is so trendy these days that Brad Pitt is getting millions of people to sit through a movie about quantitative methodology. Moneyball, based on the 2003 bestseller by Michael Lewis, traces the rise of new methods that the Oakland A’s used to identify undervalued baseball players so the team could win more games with a smaller payroll. A lot of education reformers are calling for a similar approach to evaluate teachers and improve student performance. Given that I’m a longtime reformer and love baseball, you’d think I’d be all over this idea. But there are some significant strikes against a Moneyball approach to education.
Poor data quality. In baseball, you can rely on the accuracy of a statistic such as a batting average or on-base percentage. In education, we’ve seen an explosion of data and statistics during the past decade — it’s one of the quiet successes of No Child Left Behind. Good teaching can be magical, but that doesn’t mean it can’t be measured. We can differentiate among teachers in increasingly sophisticated ways. Unfortunately, while states are trying to do better, not all the data being produced are yet high-quality. In some states, for instance, standards for accuracy are lax or the data aren’t audited to check for errors. And just 14 states have standards for what a district should do to locate or figure out what happened to a departing student, according to the Data Quality Campaign, a national nonprofit organization that has led the charge to improve state education data systems.
In addition, too many states have data systems that are inadequate or underutilized. According to the Data Quality Campaign, only 35 states are able to link student data to teacher data — and fewer still actually do so in practice, in no small part because it’s so politically contentious. And a lack of transparency plagues some states, where parents and other stakeholders cannot easily go online to find the data, use them to answer questions, or learn about schools. What good is a lot of data if it’s difficult or impossible to use?
Lack of common definitions. In baseball, you get three strikes or four balls. And everyone agrees that a run is when a player crosses home plate. In education, we still don’t have common definitions about some really fundamental things. The Data Quality Campaign reports that just 18 states have a statewide definition of “teacher of record,” meaning that they bother to identify which adult or adults are responsible for teaching which students. It’s hard to track how good a job teachers are doing if you don’t know who’s doing what. Likewise, states and school districts can still game graduation rates by giving out more than one kind of diploma, and what constitutes “proficiency” for students at different points in their educational progress varies widely between states. If baseball teams could decide to have an extra out or two whenever they wanted or call a walk a hit when it was convenient, it would make games ridiculous. Yet that’s the reality in education today.
Little respect for evidence. On Oct. 9 the New York Times ran a front-page story highlighting how educational-software vendors make a variety of claims that don’t stand up to scrutiny. They can do this because evidence in education is still BYOB (Bring Your Own Bluster). Research standards are frequently misunderstood or ignored, and the hodgepodge of state standards and tests makes it challenging to judge competing claims. For all the disagreements about what matters most in baseball, there is an underlying consensus about its fundamental requirements for success — getting on base, scoring runs, and preventing the other team from doing the same. In education, absent a consensus about what matters, faddishness prevails in public policy and the marketplace. Should we be teaching kids to systematically read and analyze content, or to find things online when they need them? The answer depends on whom you ask.
There are great schools that get around these challenges and use data in creative ways to serve students better. The examples run from low-tech “data walls” that use 3 x 5 cards to track student progress to New York City’s “School of One” that can instantly tell whether a student is ahead or behind the average for similar students in mastering a specific standard or skill. But improving data quality and the underlying culture of data and evidence in education is key to creating an environment where Moneyball-like tools can make a broad difference across our school system, not just in isolated pockets.
That said, addressing the technical issues still only gets us halfway there. In any human endeavor, data alone are insufficient and must be balanced with training and judgment. Just ask this year’s Boston Red Sox. On paper, the Red Sox should have played deep into October with strong pitching and marquee players at multiple positions. Instead, the team broke down and failed to even make the playoffs. Unless you’re a player, it’s impossible to know exactly what happens inside a baseball clubhouse, but numerous reports indicate that team chemistry and other intangibles played a role in the collapse.
In other words, better metrics won’t relieve managers of the need to manage in the education world any more than they do on the baseball field. And data tools aren’t very useful if people don’t have the training and know-how to wield them. Figuring out how to balance number crunching with professional judgment is the hard work ahead for states and cities trying to develop new teacher-evaluation systems and better ways of using data to hold schools accountable and improve them. The ultimate goal is a system that is genuinely customized and differentiated for students. But to get there, the education world needs to learn to hit singles before it can expect to hit home runs.
Disclosure: Bellwether has done executive-placement work for the Data Quality Campaign.