The High Cost of Not Doing Experiments

This article was originally published on The Psych Report before it became part of the Behavioral Scientist in 2017.

We can pay dearly, in blood, treasure, and well-being, for experiments that aren’t done. In the nearly fifty years that Head Start has been in existence, we have spent $200 billion on it. Head Start is a preschool program for poor, primarily minority children intended to improve their health, academic achievement, and, it was hoped, their IQs. What have we gotten for our investment? The program did improve the children’s health, and initially improved IQ and academic success. But the cognitive gains lasted only a few years; by mid–elementary school the children were doing no better than children who hadn’t been in the program.

We don’t know for sure whether the Head Start children fared any better as adults than did children who weren’t in the program. That’s because assignment to the program was not random. Kids who ended up in Head Start could have differed in any number of unknown ways from those who didn’t attend the program. All the adult outcome data, of which there is shockingly little, relies on purely retrospective information about assignment: people had to remember whether they had been in a preschool program and, if so, which one. Retrospective studies are subject to a great deal of potential error, especially when the memories in question go back to events decades in the past. The retrospective studies do show apparent gains in adult life outcomes for children who were in Head Start. But this result doesn’t even rise to the level of a natural experiment, because it would be surprising if there weren’t preexisting differences between children who were in Head Start and those who were not.
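To make the selection problem concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration; nothing comes from actual Head Start data. By construction the program has zero effect, yet the retrospective comparison still shows a “gain” for enrolled children, because advantaged families are (in this toy world) more likely to enroll. Random assignment removes the artifact:

```python
# Toy simulation: why non-random enrollment can mimic a program "effect".
# All parameters are invented; the true program effect is fixed at zero.
import random

random.seed(0)
N = 100_000
TRUE_PROGRAM_EFFECT = 0.0  # the program does nothing, by construction

def life_outcome(family_advantage, in_program):
    # Adult outcome driven by a preexisting difference, plus noise.
    return family_advantage + TRUE_PROGRAM_EFFECT * in_program + random.gauss(0, 1)

def mean(xs):
    return sum(xs) / len(xs)

# Self-selection: kids from advantaged families are more likely to enroll.
enrolled, not_enrolled = [], []
for _ in range(N):
    advantage = random.gauss(0, 1)
    joins = random.random() < (0.7 if advantage > 0 else 0.3)
    (enrolled if joins else not_enrolled).append(life_outcome(advantage, joins))

print("retrospective 'gain':", mean(enrolled) - mean(not_enrolled))  # well above 0

# Random assignment severs the link between advantage and enrollment.
treatment, control = [], []
for _ in range(N):
    advantage = random.gauss(0, 1)
    joins = random.random() < 0.5
    (treatment if joins else control).append(life_outcome(advantage, joins))

print("randomized estimate:", mean(treatment) - mean(control))  # close to 0
```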

A lot of money continues to be spent on something that may or may not be effective.

Fortunately, we know some preschool programs have a huge effect on adult outcomes. Randomized-assignment experiments with programs more intensive than Head Start produced modest but long-lasting IQ gains and, much more important, huge academic and economic gains for adults who had been in the treatment groups.


The costs of not knowing what works and what doesn’t in the way of preschool programs have been very great indeed. The $200 billion for Head Start might have been better spent on a smaller number of particularly vulnerable children, providing them with more intensive experiences. That might have produced far greater societal benefits. (And we do in fact know that the poorer the child, the greater the impact of high-quality early childhood education. It doesn’t seem to much affect outcomes for middle-class children.) Moreover, no experiments were conducted to find out what aspects of Head Start (if any) were the most effective. Is it better to focus on academics or on social factors? Half days or full days? Are two years needed or would one year make almost as much difference? The social and economic consequences of knowing the answers to such questions would be enormous. And getting the answers would have been easy and dirt cheap in comparison to what has been spent.

At least it’s unlikely that Head Start does any harm to children who participate in it. But many interventions dreamed up by nonscientists actually do harm.

Well-intentioned people invented a program to help possible trauma victims soon after a tragedy occurs. So-called grief counselors encourage participants in a treatment group to recount the incident from their own perspective, describe their emotional responses, offer their comments on others’ reactions, and discuss their stress symptoms. The counselor assures participants that their reactions are normal and that such symptoms generally diminish with time. Some nine thousand grief counselors descended on New York City in the wake of 9/11.

Grief counseling of this sort seems like an excellent idea to me. However, behavioral scientists have conducted more than a dozen randomized experiments examining critical incident stress debriefing (CISD). They have found no evidence that the activity has a positive effect on depression, anxiety, sleep disturbance, or any other stress symptoms. There is some evidence that people who undergo CISD are more likely than those who don’t to experience full-blown post-traumatic stress disorder.

As it happens, behavioral scientists have found some interventions that actually are effective for trauma victims. A few weeks after a critical incident, the social psychologist James Pennebaker has trauma victims write down, in private, and for four nights in a row, their innermost thoughts and feelings about the experience and how it affects their lives. And that’s all. No meetings with a counselor, no group-therapy encounters, no advice about how to handle the trauma. Just a writing exercise. The exercise typically produces a substantial reduction in grief and stress. It’s not at all plausible to me that this exercise would be very effective. Certainly not as plausible as the idea that immediate intervention, grief sharing, and advice would be effective. But there it is. Assumptions tend to be wrong.


Pennebaker thinks his writing exercise works because it helps people, after a period of suffering and incubation, to develop a narrative to understand the event and their reactions to it. And it seems to be the case that the people who improve most are those who began the exercise with inchoate and disorganized descriptions and ended with coherent, organized narratives that gave meaning to the event.

Other well-meaning people have tried to inoculate teenagers against peer pressure to commit crimes and engage in self-destructive behavior, with results that are sometimes even more disappointing than CISD for trauma victims.

Decades ago inmates in Rahway State Prison in New Jersey decided to do something to warn at-risk adolescents of the dire consequences of criminal behavior. The inmates showed the kids what prison was like, including graphic accounts of rape and murder within its walls. An award-winning documentary christened the program Scared Straight! The name and the practice spread widely throughout the United States.

Do Scared Straight programs work? Seven experimental tests of the programs have been carried out. Every single study found the Scared Straight kids to be more likely to commit crimes than kids in the control group who were exposed to no intervention at all. On average, the increase in criminal activity was about 13 percent.

The Rahway program still exists, and to this point more than fifty thousand New Jersey kids have passed through it. Let’s multiply fifty thousand by 13 percent. The figure we get is sixty-five hundred. That’s how many more crimes have been committed than would have been committed had the well-meaning convicts never thought up their scheme. And that’s just one area of New Jersey. The program has been duplicated in many other communities. A study commissioned by the Washington State Institute for Public Policy estimated that every dollar spent on Scared Straight incurs crime and incarceration costs of more than two hundred dollars.
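The arithmetic can be checked in a couple of lines. This is only a back-of-the-envelope restatement of the figures quoted above, not an independent estimate:

```python
# Back-of-the-envelope check of the figures quoted in the text.
participants = 50_000   # kids who have passed through the Rahway program
increase = 0.13         # average rise in criminal activity across the studies

print(int(participants * increase))  # 6500: the "sixty-five hundred" above

# Washington State Institute for Public Policy estimate quoted above:
cost_multiplier = 200   # more than $200 in crime and incarceration costs per $1
print(f"$1 of Scared Straight -> over ${cost_multiplier} in downstream costs")
```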

Why doesn’t Scared Straight work? It certainly seems to me that it should. We don’t know why it doesn’t work, and we certainly don’t know why it should be counterproductive, but that doesn’t matter. It’s a tragedy that it was invented and a crime that it hasn’t been stopped.

Why hasn’t it been stopped? I’ll venture the guess that it just seems so obvious that it should work. Many people, including many politicians, prefer to trust their intuitively compelling causal hypotheses over scientific data. It doesn’t help that scientists can’t offer any convincing explanations for why Scared Straight doesn’t work. Scientists, especially social scientists, don’t fall into the trap of holding on to their intuitive causal theories in the face of conflicting data, because they are well aware that ATTBW: Assumptions Tend to Be Wrong. (As of this writing, the A&E channel had just finished airing a program that sang the praises of Scared Straight.)


D.A.R.E. is another elaborate attempt to keep kids out of trouble. As part of the Drug Abuse Resistance Education program, local police officers undergo eighty hours of training in teaching techniques and then visit classrooms to present information intended to reduce drug, alcohol, and tobacco use. It’s been funded by state, local, and federal government sources to the tune of $1 billion per year. According to D.A.R.E.’s website, 75 percent of American school districts participate in the program, as do schools in forty-three countries.

But in fact D.A.R.E., at least as it has been conducted for the past thirty years, doesn’t decrease children’s use of drugs. D.A.R.E. doesn’t admit the ineffectiveness of its programs and actively combats critics who present scientific evidence of its failures. Programs intended by D.A.R.E. to supplement or replace the original have not yet been thoroughly evaluated by external institutions.

Why doesn’t D.A.R.E. work? We don’t know. It would be nice if we did, but causal explanations are unnecessary. As it happens, some programs intended to lower the likelihood of drug, alcohol, and tobacco use do work. These include LifeSkills Training and the Midwestern Prevention Project. These programs have elements missing from the original D.A.R.E. program, notably teaching preadolescents skills in resisting peer pressure. The inventors of D.A.R.E. assumed that police are important social influence agents for teenagers; a social psychologist could have told them that peers are a much more effective source of influence. The more successful programs also provide information about actual rates of drug and alcohol use among teenagers and adults. Recall that such information often surprises teenagers, because the true rates are lower than most youngsters believe, and accurate knowledge about others’ behavior can lower rates of abuse.

Meanwhile, programs that damage young people are still being conducted, and programs that help are underused or used not at all. Society is paying a high price in dollars and human suffering for wrong assumptions.

Excerpted from Mindware: Tools for Smart Thinking, published in the United States by Farrar, Straus & Giroux. Copyright © 2015 Richard Nisbett, all rights reserved. Reprinted with permission.

Disclosure: Richard Nisbett is a member of The Psych Report’s Advisory Board.