A review of medical studies published from 1990 to 2003 in three prestigious journals -- the New England Journal of Medicine, JAMA and The Lancet -- has called the validity of approximately one-third of them into serious question.
If a relatively 'hard' science (like medicine) has such difficulty with accuracy, then the results offered by the so-called 'soft' sciences (like sociology) should be approached with a high degree of skepticism. This is especially necessary since public policy and laws are often shaped by such studies.
Consider the 'feminist' issues of rape or domestic violence. Studies that address these areas are often released in combination with policy recommendations. Indeed, they sometimes appear to be little more than a springboard from which advocates can launch a campaign for more law.
In turn, the laws that result often provide for more research. The Violence Against Women Act, or VAWA -- now up for re-authorization before Congress -- is an example. VAWA includes provisions for more tax-funded research: precisely the sort of research that created it in the first place.
And so a self-reinforcing cycle is established: studies lead to laws, which lead to similar tax-funded studies, which call for more law.
The cycle should be broken.
This does not mean that the law should be separated from the reality checks provided by solid data. Quite the contrary. It means that the current self-sustaining cycle tends to discourage contrary evidence and critical thinking about the data on which the laws rest.
This is not a mere academic matter. Inaccurate studies become entrenched in laws that govern our daily lives. To use VAWA as an example again: the Act incorrectly assumes that women, not men, are the victims of domestic violence, and it has been influential in denying men access to shelters. This denial often extends to the older male children of women who seek assistance.
In the best of circumstances, research is unreliable outside strictly defined limitations; even within those limits, research generally provides only an indication rather than a proof.
The reliability of studies declines sharply when you move from the hard sciences to the soft ones.
'Hard science' refers to certain natural sciences, like physics and chemistry. These disciplines pursue accuracy and objectivity through observing and measuring objects or phenomena in order to produce results that can be independently replicated. In other words, hard science uses the scientific method.
'Soft science' refers to the social sciences, which include psychology, sociology, political science and other explorations of the human condition. Because human nature is not as easily observed or measured as objects, complex social interactions rarely offer replicable results.
There are just too many unpredictable and unknown factors, too few research controls. Soft science must rely more heavily upon the interpretation of data. In short, it produces less reliable results.
Interpretation -- that is, the filtering of data through a researcher's assumptions, goals and beliefs -- is not unique to the soft sciences. It merely runs rampant there due to the lack of controls. Nevertheless, all research is vulnerable to being skewed, sometimes deliberately so.
On July 11, the Associated Press reported, "Allegations of misconduct by U.S. researchers reached record highs last year as the Department of Health and Human Services received 274 complaints -- 50 percent higher than 2003 and the most since 1989 when the federal government established a program to deal with scientific misconduct."
What motivates a researcher to bias a study, survey or report?
There are many answers, ranging from laziness to the concealment of incompetence to the pursuit of prestige. In the hard sciences, the most common answer is probably "funding".
The scientific community is still reeling from recent revelations about Eric T. Poehlman, a leading researcher on aging and obesity. Poehlman simply faked the data on 17 applications for federal grants totaling nearly $3 million. His 'findings,' published in prestigious medical journals, helped to define how medicine approaches the effects of menopause on women's health.
The soft sciences share all these research vulnerabilities. But, because they are less constrained by research controls, the most common answer there to what motivates bias may well be "political belief."
The foregoing statement will surprise few people. For example, 'feminist research' is notorious for arriving at feminist conclusions through research that includes clear political assumptions.
It may surprise people, however, to hear that I don't think political agendas are inevitable within the soft sciences. Even on controversial subjects like rape, it is possible to find interesting studies in which researchers sincerely pursue solid data.
But you have to go back a few decades. In his book from the '70s, "Men Who Rape: The Psychology of the Offender," Nicholas Groth offered a theory that sounds almost jarring to today's ears. He wrote, "One of the most basic observations one can make regarding men who rape is that not all such offenders are alike." That is, a drunken boyfriend who rapes because he does not hear the "no" being uttered should not be placed in the same research category as a back-alley rapist who leaves his victim physically crippled for life.
A rape researcher could not make that statement today on a college campus. He would be fired, bludgeoned into silence, or stripped of his funding. There is now only one acceptable view of rape: it is an act of power. There is only one research category of rapist: the oppressor.
I believe the cycle of studies leading to laws leading to studies should be broken not because I am against solid research but because I am for it. Bring skepticism and common sense to all data you hear; withhold your tax dollars.
Copyright © 2005 Wendy McElroy.