Think of it as a sort of midlife crisis for an entire profession.
Education researchers are wondering if the work they do is making enough of a difference in real schools and districts. In national commissions, networks, and discussion groups, they are increasingly asking whether it can be made more usable.
A scholarly study, for instance, may be fine for publishing in a journal, but chances are few teachers or principals will ever use it for guidance. And while research might document a successful innovation that works in one school, the same practice is often hard to transplant to another.
“One of the big problems in educational research is that people haven’t understood the need to take research one step further and translate it to usable knowledge,” said Ellen Condliffe Lagemann, the dean of Harvard University’s graduate school of education. “Medicine has for a while had something called ‘translational knowledge,’ and that’s what we need to do in education.”
Ms. Lagemann, one of the leading voices in the movement for more usable education scholarship, crusaded for the idea when she was the president of the Spencer Foundation, a Chicago-based philanthropy that supports educational research. (The foundation also underwrites coverage of research in Education Week.)
She’s had plenty of company among prominent academics, foundation officials, and members of national panels, many of whom know from experience that detours and potholes inevitably line the road from research to practice. In fact, they are discovering, the road isn’t straight. It’s more like a traffic circle, through which practitioners feed into the work of the researchers, and vice versa.
This kind of thinking, however, comes at a time when the federal government and members of Congress are espousing a different message for educational research. Their aim is to transform education into an “evidence-based practice” by making it more scientific.
That movement concerns some proponents of “usability,” who fear that the focus on scientific experimentation could crowd out federal support for other kinds of research and development that might help practitioners. Most experts in the field agree, though, that to be effective, education research needs to be both credible and usable.
“Addressing one well does not ensure that the other is addressed well,” said M. Suzanne Donovan, the associate director of a National Research Council committee that is drawing up a plan for a new, national infrastructure aimed at producing useful education knowledge. “But they’re not mutually exclusive.”
When researchers start their careers, they sometimes imagine that the nuggets of truth they sift from their investigations will flow directly into the hands of practitioners who, in turn, will use that information to improve what they do. That rarely happens in any field, said Thomas K. Glennan Jr., a senior adviser for education policy in the Washington office of the RAND Corp., an influential think tank.
A ‘disconnect’
“What happens is research finds its way into some materials or products or protocols that are used, so that when you fly an aircraft, there is surely research associated with the design of the airfoils or whatever,” said Mr. Glennan, who once headed the now-defunct National Institute of Education, the federal government’s first national education research agency.
“The research is important,” he said, “but it doesn’t drive it. It’s the desire to solve some problem that drives the creation of products or materials.”
In contrast, sheer intellectual curiosity is often what drives researchers’ investigative choices in education. What’s more, those studies are mostly hatched in university offices and laboratories far from the K-12 classrooms where the work of teaching gets done.
“In medicine, most of the research is done in hospitals, and manufacturers do their own research in house when they’re trying to develop a product,” said Deborah J. Stipek, the dean of Stanford University’s school of education. “In education, there’s kind of an organizational disconnect there.”
Even researchers who do their work in schools rarely bring what they have learned back to the classrooms where their investigations occurred.
As a result, experts say, researchers tend to produce studies steeped in the language of their academic cultures. Educators view those writings as inaccessible, arcane, and irrelevant to their everyday jobs.
Even if teachers have the fortitude to plow through academic journals, chances are their professional training did not include coursework in how to distinguish good research from bad.
At the same time, universities offer few incentives for young researchers to take on anything other than traditional scholarly studies in education. Producing materials and programs that schools can use is often not the kind of research that gets published in peer-reviewed journals or that earns tenure for a young scholar.
“Professors of practice who do more applied kinds of work are often seen as second-class citizens,” said Ms. Lagemann, who would like to see an upgrade in that status.
Frederic A. Moser, a consultant and former program officer with the Carnegie Corporation of New York who advocates fostering more practical educational knowledge, agreed.
“There’s always been an ambivalence between essentially being aimed at serving a profession and being a discipline,” Mr. Moser said. “My advice is to get over it. I think we’re all hung up on admitting that what education research per se ought to be doing is producing stuff that ought to be used.”
But working in the practical world can be uncomfortable for scholars used to more cloistered academic environments, according to John Q. Easton, who helped found the Consortium on Chicago School Research, which studies the impact of major policy changes in the Chicago public schools. The consortium’s decidedly utilitarian work has sometimes put it in the thick of bitter public debates over policies to end social promotion, for example, or to decentralize schools.
“There’s a lot more publicness to this, so you’re out there more,” Mr. Easton said. “You’re more exposed.”
Another deterrent for researchers, observers of the field have noted, is the limited marketplace for practical knowledge in education, unlike in fields such as engineering, medicine, and pharmacology. Without private businesses to underwrite development work in education research or much in the way of federal dollars to support it, experts such as Mr. Glennan contend, too many studies in education wind up being neither practical nor scientifically credible.
Big Bird and Pasteur
Despite all those barriers, some research has managed to find its way into useful educational materials.
One notable example is the set of findings from psychology and educational theory that fed into the creation of “Sesame Street,” the public-television show credited since the 1970s with teaching millions of preschoolers fundamental concepts of reading and arithmetic.
By the 1990s, the drip of work from practical-minded researchers had grown to at least a thin trickle. The movement was spurred in part by the publication of Pasteur’s Quadrant by Donald E. Stokes, a Princeton University politics and public-policy professor who died in 1997. Published that same year by the Washington-based Brookings Institution, his book argued for joining the separate worlds of “basic” and “applied” research in all fields of scientific endeavor.
“I think people were chafing under the basic-applied dichotomy,” said Susan Fuhrman, the dean of the University of Pennsylvania’s graduate school of education. “This was a way of viewing the world that people were coming to, though they were not yet describing it.”
Ms. Fuhrman, who in 1985 helped found the Center for Policy Research in Education, was among the handful of educational researchers who were pointing their careers in that direction.
The raison d’être of the federally financed center, whose headquarters is at Ms. Fuhrman’s university in Philadelphia, is to meet the research needs of education policymakers and track the impact of policy changes in the field. On any given day, researchers from the center might find themselves advising a big-city mayor, testifying before a congressional committee, or conducting a workshop for state legislators.
Researchers interested in promoting usability also point to the Learning Technology Center at Vanderbilt University’s Peabody College, where research in cognitive science is often blended with educational technology to create practical products for the classroom. One example is the “Adventures of Jasper Woodbury,” a videodisc and CD-ROM program for teaching mathematical problem-solving skills to elementary school students that was developed by John D. Bransford and his colleagues at the Nashville-based center.
The influential report on early reading produced in 1998 by the National Research Council, a branch of the congressionally chartered National Academy of Sciences, also is cited as an example of usable research. The work of a national consensus panel, the report provided guidance for policymakers and practitioners who were caught in the wars over the best methods for teaching reading.
Likewise, researchers say, ongoing studies aimed at figuring out how to take an innovation that works in one school and successfully transplant it into hundreds more could one day provide some useful knowledge for the field.
“That’s the kind of knowledge that many people would like to have. Therefore, it’s something the research community really needs to answer,” said Christopher Dede, a researcher at the Harvard graduate school of education. Mr. Dede, a professor of learning technologies, is putting together a conference on the subject next month, the first of what will be a series of meetings aimed at exploring usable education knowledge.
‘Useful’ vs. ‘scientific’
“Consensus” panels, videodisc programs, studies, and technical advice, however, are all very different sorts of products, which raises the question: Just what is “usable”?
“I think people are throwing a lot of terms around now,” said Ms. Stipek of Stanford, who heads a year-old national network, underwritten by the Chicago-based John D. and Catherine T. MacArthur Foundation, that is exploring the concept.
In its quest to get a grip on the subject, the group is drafting its own typology for the kind of research that qualifies as useful, as well as conducting and studying such efforts nationwide.
At bottom, though, Ms. Stipek said, all such research ideally would involve ongoing collaboration between researchers and practitioners, so that researchers address the questions frontline educators are asking.
Another way to think about it, said Lauren B. Resnick, a cognitive psychologist at the University of Pittsburgh and a leading practitioner of usable research, might be “problem-solving research and development.” Ms. Resnick in the 1990s headed a National Academy of Education panel that explored the subject, using some of the thinking from Pasteur’s Quadrant.
“The core feature of it is that a researcher takes on a real education problem that needs a solution, and makes a commitment to staying with that problem in a real-world setting for as long as you need to—at least until you start to get some traction on a practical solution,” Ms. Resnick said.
“But you have to simultaneously agree that what you do is not just make something work in place,” she said, “but also yield some explanations as to why it works.”
Making that happen in lots of schools and districts, though, might require a completely new infrastructure for supporting research, according to a forthcoming report from the National Research Council. Since 1996, the council, through two different national panels, has been developing plans for what it calls a Strategic Education Research Partnership, or SERP, a long-term national effort to foster and support research that practitioners find useful and scientists view as credible.
“You get people who are very dedicated and do good work in one or two schools, but to carry it forward, the work has to be accessible to people in other schools,” said Ms. Donovan, the associate director of the more recent of those panels. “That requires an infrastructure to be there, and we currently don’t have that.”
The organizational framework her group has in mind would involve forming a series of interrelated research networks, both privately and publicly financed, spanning the country. Each would be headed jointly by a researcher and a practitioner and would spend up to 15 years exploring a key, but different, question in education. The group’s second report, due out in late spring or early summer, is supposed to ink in the details on how such a structure might work and what it could yield.
At the U.S. Department of Education, officials are banking on a different sort of mechanism to connect practitioners to useful information. When it’s up and running later this year or next, the $18.5 million What Works Clearinghouse will provide an online registry of research-based programs, policies, and materials.
“Use-inspired knowledge is critical to what we’re all about,” said Grover J. “Russ” Whitehurst, the director of the Institute of Education Sciences, the department’s recently reconfigured agency overseeing research. To figure out what issues are important to the field, he said, the institute also relies on advisory panels of practitioners and researchers.
The trouble with the clearinghouse and the SERP proposal, said Mr. Moser, the consultant, is that they may be underestimating the job before them.
“They think there’s stuff out there now that can be evaluated, and there just isn’t,” he said. That’s in part because too few education studies meet the scientific standards that policymakers are beginning to set for them.
For its part, the Education Department is trying to address the supply problem by channeling new money into experimental studies of early reading instruction, preschool literacy instruction, alternative certification of teachers, English-language learning, and a variety of other subjects.
The issue of scientific quality, however, has become a contentious one in the national conversation about usability. President Bush’s administration and Congress have put a premium on making education research more “scientifically based”—to the point that the “No Child Left Behind” Act of 2001 mentions the phrase, or some form of it, 111 times.
Some researchers see that emphasis as compatible with the push to make studies useful. As Mr. Easton of the Chicago Consortium pointed out, “Usable research is not bad research packaged to be more usable.”
But others worry that the “scientifically based” movement emphasizes particular research methodologies, such as pure experiments, at the expense of other methods that could yield equally valuable insights for practitioners.
Mr. Whitehurst noted, however, that experimental studies are just a different stage in the research continuum—and one that educators in the end have to pay attention to.
“A district might have someone come in and do case studies or focus groups to find out what the issues are in the district,” he said. “But once you get to the issue of whether it works or not, you ultimately want to get to the point where you can use quantitative, experimental designs.”
Missing the trees?
While randomized, experimental studies may indeed be a “gold standard” for research, some researchers note that they can’t answer every education question. Statistical experiments cannot, for instance, explain why an intervention works, describe how it’s being implemented, or shed light on whether teachers and administrators understand it.
Another problem: In the messy real world, programs often change as they’re put into place. It’s hard, researchers point out, to take quantitative measures of moving targets.
Such unexpected changes are what Barbara Neufeld, for one, often encounters when she comes in to study a district.
Ms. Neufeld is the founder of Education Matters, a small Cambridge, Mass., firm that provides qualitatively oriented research services to school systems and education groups. When a district changes its reform strategies in midstream, Ms. Neufeld stays with it, perhaps offering advice along the way, and always describing what plays out.
In a strict experiment, a researcher might drop the district from the study or miss out entirely on the fact that practitioners are deviating from the plan.
“If you’re interested in an academic career, you’ll never get tenure this way,” Ms. Neufeld said of the practitioner-oriented reports she produces. And, she concedes, the results she collects in one district might not necessarily translate to others.
Her clients, nonetheless, find the information they get from Ms. Neufeld useful. They include Thomas W. Payzant, the superintendent of the Boston schools.
“Without that kind of look and understanding, you may draw incorrect conclusions from a study that is just based on quantitative data,” Mr. Payzant said. “I’m not an advocate of saying the only kind of research should be what you do with double-blind kinds of experiments.”
In the end, practitioners observe, having a say in what researchers study can make all the difference in how much attention they pay to the findings.
The fact is, said Nancy Owen, a Providence, R.I., principal who has been active in efforts to promote usability, “research is just so much more valuable to people when they see it’s of use to them.”
Coverage of research is underwritten in part by a grant from the Spencer Foundation.
Vol. 22, number 27, page 1 - © 2003 Editorial Projects in Education