Sex, Lies and Politics | The Nation



Kids "know far too much too soon" about sex, a recent college graduate named Wendy Shalit lamented in her 1999 book A Return to Modesty, and Congress seems to agree. In recent years, it has been doing everything in its power to prevent comprehensive sexuality education from reaching America's youngsters and to promote, in its place, an educationally limited, fear-based curriculum that preaches "abstinence unless married."

Adapted from Not in Front of the Children, copyright © 2001 by Marjorie Heins. Used by arrangement with Hill & Wang, a division of Farrar, Straus & Giroux, LLC. All rights reserved.

About the Author

Marjorie Heins
Marjorie Heins, who directs the Free Expression Policy Project at the National Coalition Against Censorship, is the author of Not in Front of the Children.

Also by the Author

One of the most persistent myths in the culture wars today is that
social science has proven "media violence" to cause adverse effects. The
debate is over; the evidence is overwhelming, researchers, pundits and
politicians frequently proclaim. Anyone who denies it might as well be
arguing that the earth is flat.

Jonathan Freedman, professor of psychology at the University of Toronto,
has been saying for almost twenty years that it just isn't so. He is not
alone in his opinion, but as a psychologist trained in experimental
research, he is probably the most knowledgeable and qualified to express
it. His new book, Media Violence and Its Effect on Aggression,
surveys all of the empirical studies and experiments in this field, and
finds that the majority do not support the hypothesis that violent
content in TV and movies has a causal relationship to real violence in
society. The book is required reading for anyone who wishes to
understand this issue.

I should say at the outset that unlike Freedman, I doubt whether
quantitative sociological or psychological experiments--useful as they
are in many areas--can tell us much about the effects of something as
broad and vague in concept as "media violence." As a group of scholars
put it recently in a case involving censorship of violent video games:

In a field as inherently complex and multi-faceted as human aggression,
it is questionable whether quantitative studies of media effects can
really provide a holistic or adequately nuanced description of the
process by which some individuals become more aggressive than others.

Indeed, since "media violence" encompasses everything from cartoons,
sports and news to horror movies, westerns, war documentaries and some
of the greatest works of film art, it baffles me how researchers think
that generalizations about "effects" can be made based on experiments
using just one or a few examples of violent action.

Freedman, by contrast, believes that the experimental method is capable
of measuring media effects. This may explain why he is so indignant
about the widespread misrepresentation and distortion of the research.

He explains in his preface that he became interested in this area by
happenstance, and was surprised when he began reading the research to
find that its results were quite the opposite of what is usually
asserted. He began speaking and writing on the subject. In 1999 he was
approached by the Motion Picture Association of America (MPAA) and asked
to do a comprehensive review of all the research. He had not previously
received organizational support and, as he says, "was a little nervous
because I knew there was a danger that my work would be tainted by a
connection with the MPAA." He agreed only after making it clear that the
MPAA "would have no input into the review, would see it only after it
was complete, and except for editorial suggestions, would be forbidden
to alter what I wrote. Of course," he says,

they asked me to do the review, rather than someone else, because they
knew my position and assumed or at least hoped that I would come to the
same conclusion after a more comprehensive review. But there was no quid
pro quo. Although I was nervous about being tainted, I am confident that
I was not. In any case, the conclusions of this review are not different
from those of my earlier review or those I expressed in papers and talks
between 1984 and 1999.

The book proceeds meticulously to examine the approximately 200 studies
and experiments that Freedman was able to find after an exhaustive
search. (He suggests that the exaggerated numbers one often
hears--1,000, 3,500 or simply "thousands" of studies--probably derive
from a statement made by psychologist John Murray in the early 1980s
when the National Institute of Mental Health sponsored a review of the
media violence research. Murray said that there were about 2,500
publications of all kinds that were relevant to the review. This is far
different, of course, from the number of empirical studies and experiments.)

Freedman begins with laboratory experiments, of which he found
eighty-seven. Many commentators have noted the artificiality of these
experiments, in which snippets of a violent film or TV show are shown to
one group of viewers (sometimes children, sometimes adolescents or
adults), while a control group is shown a nonviolent clip. Then their
level of "aggression" is observed--or rather, something that the
experimenters consider a proxy for aggression, such as children hitting
a Bobo doll (an inflatable plastic clown), delivering a "white noise"
blast or--amazingly--answering yes when asked whether they would pop a
balloon if given the opportunity.

As Freedman and others have pointed out, these laboratory proxies for
aggression are not the real thing, and aggressive play is very different
from real-world violent or destructive behavior. He comments:

Quite a few studies with children defined aggression as hitting or
kicking a Bobo doll or some other equivalent toy.... As anyone who has
owned one knows, Bobo dolls are designed to be hit. When you hit a Bobo
doll, it falls down and then bounces back up. You are supposed to hit it
and it is supposed to fall down and then bounce back up. There is little
reason to have a Bobo doll if you do not hit it. Calling punching a Bobo
doll aggressive is like calling kicking a football aggressive. Bobos are
meant to be punched; footballs are meant to be kicked. No harm is
intended and none is done.... It is difficult to understand why anyone
would think this is a measure of aggression.

Freedman notes other serious problems with the design of lab experiments
to test media effects. When positive results are found, they may be due
simply to the arousal effect of high-action entertainment, or to a
desire to do what the subjects think the experimenter wants. He points
out that experimenters generally haven't made efforts to assure that the
violent and nonviolent clips that they show are equivalent in other
respects. That is, if the nonviolent clip is less arousing, then any
difference in "aggression" afterward is probably due to arousal, not
imitation. Freedman's favorite example is an experiment in which one
group of subjects saw a bloody prizefight, while the control group was
shown a soporific film about canal boats.

But the most striking point is that even given the questionable validity
of lab experiments in measuring real-world media effects, the majority
of experiments have not had positive results. After detailed analysis of
the numbers that the researchers reported, Freedman summarizes:
Thirty-seven percent of the experiments supported the hypothesis that
media violence causes real-world violence or aggression, 22 percent had
mixed results and 41 percent did not support the hypothesis. After he
factored out experiments using "the most doubtful measures of
aggression" (popping balloons and so forth), only 28 percent of the
results were supportive, 16 percent were mixed and 55 percent were
nonsupportive of the "causal hypothesis."

For field experiments--designed to more closely approximate real-world
conditions--the percentage of negative results was higher: "Only three
of the ten studies obtained even slightly supportive results, and two of
those used inappropriate statistics while the third did not have a
measure of behavior." Freedman comments that even this weak showing
"gives a more favorable picture than is justified," for "several of the
studies that failed to find effects actually consisted of many separate
studies." Counting the results of these separate studies, "three field
experiments found some support, and twenty did not."

Now, the whole point of the scientific method is that experiments can be
replicated, and if the hypothesis is correct, they will produce the same
result. A minority of positive results are meaningless if they don't
show up consistently. As Freedman exhaustively shows, believers in the
causal hypothesis have badly misrepresented the overall results of both
lab and field experiments.

They have also ignored clearly nonsupportive results, or twisted them to
suit their purposes. Freedman describes one field experiment with
numerous measures of aggression, all of which failed to support the
causal hypothesis. Not satisfied with these results, the researchers
"conducted a complex internal analysis" by dividing the children into
"initially high in aggression" and "initially low in aggression"
categories. The initially low-aggression group became somewhat more
aggressive, no matter which programs they watched, while the initially
high-aggression group became somewhat less aggressive, no matter which
programs they watched. But the children who were categorized as
initially high in aggression and were shown violent programs "decreased
less in aggressiveness" than initially high-aggression children who
watched neutral programs. The researchers seized upon this one highly
massaged and obscure finding to claim that their results supported the
causal hypothesis.

Freedman examines other types of studies: surveys that compare cities or
countries before and after introduction of television; experiments
attempting to assess whether media violence causes "desensitization";
longitudinal studies that measure correlations between aggressiveness
and preference for violent television over time. No matter what the type
of study or experiment, the results overall are negative. Contrary to
popular belief, there is no scientific support for the notion that media
violence causes adverse effects.

Why, then, have not only researchers and politicians but major
professional associations like the American Academy of Pediatrics and
the American Medical Association repeatedly announced that thousands of
studies have established adverse effects of media violence? One reason
was suggested to me recently by a pediatrician active in the AAP. The
organization's guidelines call for scientific support for policy
statements. This puts the AAP in a serious bind when, as is the case
with media violence, its leaders hold a strong opinion on the subject.
It is tempting, then, to accept and repeat leading researchers'
assertions about the data--even when those assertions are distorted or
erroneous--and that is what the professional associations have done.

Another factor was candidly suggested by Dr. Edward Hill, chair of the
AMA board, at a panel discussion held by the Freedom Forum in New York
City last year. The AMA had "political reasons," Dr. Hill said, for
signing on to a recent statement by professional organizations asserting
that science shows media violence to be harmful. The AMA is "sometimes
used by the politicians," he explained. "We try to balance that because
we try to use them also."

Because Jonathan Freedman believes the scientific method is capable of
measuring the impact of media violence, the fact that it hasn't done so
is to him strong evidence that adverse effects don't exist. I'm not so
sure. I don't think we need science to know from observation that media
messages over time can have a powerful impact--in combination with many
other factors in a person's life. Some violent entertainment probably
does increase aggression for some viewers, though for as many or perhaps
more, the effect may be relaxing or cathartic.

If the media do have strong effects, why does it matter whether the
scientific research has been misrepresented? In part, it's precisely
because those effects vary. Even psychologists who believe that the
scientific method is relevant to this issue acknowledge that style and
context count. Some feel cartoons that make violence amusing have the
worst effects; others focus on stories in which the hero is rewarded for
using violence, even if defensively.

But equally important, the continuing claims that media violence has
proven adverse effects enable politicians to obscure known causes of
violence, such as poverty and poor education, which they seem largely
unwilling to address. Meanwhile, they distract the public with periodic
displays of sanctimonious indignation at the entertainment industry, and
predictable, largely symbolic demands for industry "self-regulation."
The result is political paralysis, and an educational structure that
actually does little to help youngsters cope with the onslaught of mass
media that surround them.

Later this year, Congress will probably reauthorize an "abstinence education" program that it created in 1996 with $250 million in matching funds (to receive the full amount, states must put up at least three-quarters as much, or an additional $187.5 million). The original package was part of the Personal Responsibility and Work Opportunity Reconciliation Act--otherwise known as welfare reform. It was the 104th Congress's attempt not only to eliminate welfare but to impose conservative standards of sexual morality on low-income Americans.

The law enumerates eight principles for "abstinence education," among them that "abstinence from sexual activity is the only certain way to avoid out-of-wedlock pregnancy, sexually transmitted diseases and other associated health problems"; that "a mutually faithful monogamous relationship in the context of marriage is the expected standard of human sexual activity"; and that "sexual activity outside of the context of marriage is likely to have harmful psychological and physical effects." Gay and lesbian youngsters, unlikely to conduct sexual activity only "in the context of marriage," are thus rendered deviant or invisible, while the children of single moms or unmarried couples are implicitly told that their parents have suffered "harmful psychological and physical effects" from sex outside marriage. Funded programs cannot discuss birth control or safer-sex techniques, except to highlight (or exaggerate) their shortcomings. The goal is quite explicitly to discourage adolescents from using condoms or other contraceptives.

The premise usually articulated to defend this unusual pedagogy is that an unambiguous "just say no" message is the only way to reduce teen pregnancy and STDs. As Oklahoma Republican Ernest Istook, one of the program's most vocal Congressional champions, puts it, teenagers cannot understand "mixed messages." While this may be true of cultural conservatives like Istook, it is not necessarily true of American teenagers, most of whom are adept at negotiating the ambiguities of popular culture. In any event, Istook's reductive approach to education and his low opinion of youthful intelligence have not proven accurate. As Planned Parenthood, Advocates for Youth and SIECUS (the Sexuality Information and Education Council of the United States) pointed out in the wake of the 1996 law, studies have shown no decrease, and often an increase, in the age of "sexual debut" after the introduction of sex-ed programs that include straightforward contraceptive and safer-sex information in addition to abstinence messages. By contrast, there is no evidence that abstinence-only programs discourage sexual activity.

By late 1998, though, every state had applied for and received abstinence-unless-married funds. Some sought to use the money without compromising existing programs; in Maine, for example, as one official explained, "the limits on what you can say are so restrictive that we decided we could not use the money for classroom programs or anywhere else where there was face-to-face contact." Two states--California and New Hampshire--ultimately did not use the funds, in California's case because it was determined that the eight-point program violated state law by withholding contraceptive information and forbidding mention of the fact that condoms reduce the risk of AIDS. And there were a few surprises: Richmond, Virginia, withdrew from participation, its director of health declaring that the program was "ill-conceived" and "did not help children who were already sexually active or gays and lesbians." Charleston County, South Carolina, also voted against accepting the federal money; the school board chairman said, "Let's not live in a fantasy land" or "play Russian roulette with the lives of our students."

Six states, on the other hand, incorporated the federal ideology into their official sex-ed programs. The Pennsylvania Senate proclaimed "Chastity Awareness Week" and urged "Chastity Day" presentations at public schools. Franklin County, North Carolina, cut three chapters out of its ninth-grade textbook because they dealt with contraception and STDs, and because the school board had not held the hearing, required by state law, before the state-mandated abstinence-unless-married pedagogy could be replaced with a comprehensive curriculum. By the end of 1999 abstinence was the sole contraception method taught at more than a third of all US public schools.

The triumph of abstinence-unless-married pedagogy is the culmination of two decades' work by the religious right in first trying to defeat, then managing brilliantly to co-opt, sexuality education. The current curriculums have their roots in the fear-based "sex hygiene" programs that social-purity groups offered early in the twentieth century. These featured hideous images of the symptoms of tertiary syphilis but ignored the varieties and pleasures of sex, or even the details of anatomy. Elements of the fear-based approach persisted even after Dr. Mary Calderone and her colleagues, with the encouragement of the National Education Association and the American Medical Association, founded SIECUS in 1964 to promote comprehensive sexuality education. ("Sexuality" rather than "sex" is the preferred term because it emphasizes the social, cultural, psychological and spiritual dimensions of human sexual life, not just reproduction, anatomy and disease prevention.)
