Screen Rage

One of the most persistent myths in the culture wars today is that
social science has proven "media violence" to cause adverse effects. The
debate is over; the evidence is overwhelming, researchers, pundits and
politicians frequently proclaim. Anyone who denies it might as well be
arguing that the earth is flat.

Jonathan Freedman, professor of psychology at the University of Toronto,
has been saying for almost twenty years that it just isn't so. He is not
alone in his opinion, but as a psychologist trained in experimental
research, he is probably the most knowledgeable and qualified to express
it. His new book, Media Violence and Its Effect on Aggression,
surveys all of the empirical studies and experiments in this field, and
finds that the majority do not support the hypothesis that violent
content in TV and movies has a causal relationship to real violence in
society. The book is required reading for anyone who wishes to
understand this issue.

I should say at the outset that unlike Freedman, I doubt whether
quantitative sociological or psychological experiments--useful as they
are in many areas--can tell us much about the effects of something as
broad and vague in concept as "media violence." As a group of scholars
put it recently in a case involving censorship of violent video games:

In a field as inherently complex and multi-faceted as human aggression,
it is questionable whether quantitative studies of media effects can
really provide a holistic or adequately nuanced description of the
process by which some individuals become more aggressive than others.

Indeed, since "media violence" encompasses everything from cartoons,
sports and news to horror movies, westerns, war documentaries and some
of the greatest works of film art, it baffles me how researchers think
that generalizations about "effects" can be made based on experiments
using just one or a few examples of violent action.

Freedman, by contrast, believes that the experimental method is capable
of measuring media effects. This may explain why he is so indignant
about the widespread misrepresentations and distortions of the research
data.

He explains in his preface that he became interested in this area by
happenstance, and was surprised when he began reading the research to
find that its results were quite the opposite of what is usually
asserted. He began speaking and writing on the subject. In 1999 he was
approached by the Motion Picture Association of America (MPAA) and asked
to do a comprehensive review of all the research. He had not previously
received organizational support and, as he says, "was a little nervous
because I knew there was a danger that my work would be tainted by a
connection with the MPAA." He agreed only after making it clear that the
MPAA "would have no input into the review, would see it only after it
was complete, and except for editorial suggestions, would be forbidden
to alter what I wrote. Of course," he says,

they asked me to do the review, rather than someone else, because they
knew my position and assumed or at least hoped that I would come to the
same conclusion after a more comprehensive review. But there was no quid
pro quo. Although I was nervous about being tainted, I am confident that
I was not. In any case, the conclusions of this review are not different
from those of my earlier review or those I expressed in papers and talks
between 1984 and 1999.

The book proceeds meticulously to examine the approximately 200 studies
and experiments that Freedman was able to find after an exhaustive
search. (He suggests that the exaggerated numbers one often
hears--1,000, 3,500 or simply "thousands" of studies--probably derive
from a statement made by psychologist John Murray in the early 1980s
when the National Institute of Mental Health sponsored a review of the
media violence research. Murray said that there were about 2,500
publications of all kinds that were relevant to the review. This is far
different, of course, from the number of empirical experiments and
studies.)

Freedman begins with laboratory experiments, of which he found
eighty-seven. Many commentators have noted the artificiality of these
experiments, in which snippets of a violent film or TV show are shown to
one group of viewers (sometimes children, sometimes adolescents or
adults), while a control group is shown a nonviolent clip. Then their
level of "aggression" is observed--or rather, something that the
experimenters consider a proxy for aggression, such as children hitting
a Bobo doll (an inflatable plastic clown), delivering a "white noise"
blast or--amazingly--answering yes when asked whether they would pop a
balloon if given the opportunity.

As Freedman and others have pointed out, these laboratory proxies for
aggression are not the real thing, and aggressive play is very different
from real-world violent or destructive behavior. He comments:

Quite a few studies with children defined aggression as hitting or
kicking a Bobo doll or some other equivalent toy.... As anyone who has
owned one knows, Bobo dolls are designed to be hit. When you hit a Bobo
doll, it falls down and then bounces back up. You are supposed to hit it
and it is supposed to fall down and then bounce back up. There is little
reason to have a Bobo doll if you do not hit it. Calling punching a Bobo
doll aggressive is like calling kicking a football aggressive. Bobos are
meant to be punched; footballs are meant to be kicked. No harm is
intended and none is done.... It is difficult to understand why anyone
would think this is a measure of aggression.

Freedman notes other serious problems with the design of lab experiments
to test media effects. When positive results are found, they may be due
simply to the arousal effect of high-action entertainment, or to a
desire to do what the subjects think the experimenter wants. He points
out that experimenters generally haven't made efforts to ensure that the
violent and nonviolent clips that they show are equivalent in other
respects. That is, if the nonviolent clip is less arousing, then any
difference in "aggression" afterward is probably due to arousal, not
imitation. Freedman's favorite example is an experiment in which one
group of subjects saw a bloody prizefight, while the control group was
shown a soporific film about canal boats.

But the most striking point is that even given the questionable validity
of lab experiments in measuring real-world media effects, the majority
of experiments have not had positive results. After detailed analysis of
the numbers that the researchers reported, Freedman summarizes:
Thirty-seven percent of the experiments supported the hypothesis that
media violence causes real-world violence or aggression, 22 percent had
mixed results and 41 percent did not support the hypothesis. After he
factored out experiments using "the most doubtful measures of
aggression" (popping balloons and so forth), only 28 percent of the
results were supportive, 16 percent were mixed and 55 percent were
nonsupportive of the "causal hypothesis."

For field experiments--designed to more closely approximate real-world
conditions--the percentage of negative results was higher: "Only three
of the ten studies obtained even slightly supportive results, and two of
those used inappropriate statistics while the third did not have a
measure of behavior." Freedman comments that even this weak showing
"gives a more favorable picture than is justified," for "several of the
studies that failed to find effects actually consisted of many separate
studies." Counting the results of these separate studies, "three field
experiments found some support, and twenty did not."

Now, the whole point of the scientific method is that experiments can be
replicated, and if the hypothesis is correct, they will produce the same
result. A minority of positive results are meaningless if they don't
show up consistently. As Freedman exhaustively shows, believers in the
causal hypothesis have badly misrepresented the overall results of both
lab and field experiments.

They have also ignored clearly nonsupportive results, or twisted them to
suit their purposes. Freedman describes one field experiment with
numerous measures of aggression, all of which failed to support the
causal hypothesis. Not satisfied with these results, the researchers
"conducted a complex internal analysis" by dividing the children into
"initially high in aggression" and "initially low in aggression"
categories. The initially low-aggression group became somewhat more
aggressive, no matter which programs they watched, while the initially
high-aggression group became somewhat less aggressive, no matter which
programs they watched. But the children who were categorized as
initially high in aggression and were shown violent programs "decreased
less in aggressiveness" than initially high-aggression children who
watched neutral programs. The researchers seized upon this one highly
massaged and obscure finding to claim that their results supported the
causal hypothesis.

Freedman examines other types of studies: surveys that compare cities or
countries before and after introduction of television; experiments
attempting to assess whether media violence causes "desensitization";
longitudinal studies that measure correlations between aggressiveness
and preference for violent television over time. No matter what the type
of study or experiment, the results overall are negative. Contrary to
popular belief, there is no scientific support for the notion that media
violence causes adverse effects.

Why, then, have not only researchers and politicians but major
professional associations like the American Academy of Pediatrics and
the American Medical Association repeatedly announced that thousands of
studies have established adverse effects of media violence? One reason
was suggested to me recently by a pediatrician active in the AAP. The
organization's guidelines require scientific support for policy
statements. This puts the AAP in a serious bind when, as is the case
with media violence, its leaders have a strong opinion on the subject.
It's tempting then to accept and repeat assertions about the data from
leading researchers in the field--even when those assertions are
distorted or erroneous--and that's what the professional associations
have done.

Another factor was candidly suggested by Dr. Edward Hill, chair of the
AMA board, at a panel discussion held by the Freedom Forum in New York
City last year. The AMA had "political reasons," Dr. Hill said, for
signing on to a recent statement by professional organizations asserting
that science shows media violence to be harmful. The AMA is "sometimes
used by the politicians," he explained. "We try to balance that because
we try to use them also."

Because Jonathan Freedman believes the scientific method is capable of
measuring the impact of media violence, the fact that it hasn't done so
is to him strong evidence that adverse effects don't exist. I'm not so
sure. I don't think we need science to know from observation that media
messages over time can have a powerful impact--in combination with many
other factors in a person's life. Some violent entertainment probably
does increase aggression for some viewers, though for as many or perhaps
more, the effect may be relaxing or cathartic.

If the media do have strong effects, why does it matter whether the
scientific research has been misrepresented? In part, it's precisely
because those effects vary. Even psychologists who believe that the
scientific method is relevant to this issue acknowledge that style and
context count. Some feel cartoons that make violence amusing have the
worst effects; others focus on stories in which the hero is rewarded for
using violence, even if defensively.

But equally important, the continuing claims that media violence has
proven adverse effects enable politicians to obscure known causes of
violence, such as poverty and poor education, which they seem largely
unwilling to address. Meanwhile, they distract the public with periodic
displays of sanctimonious indignation at the entertainment industry, and
predictable, largely symbolic demands for industry "self-regulation."
The result is political paralysis, and an educational structure that
actually does little to help youngsters cope with the onslaught of mass
media that surround them.
