
Nation Topics - Books and the Arts | The Nation


A friend and I were sitting around commiserating about the things that get to us: unloading small indignities, comparing thorns. "So there I was," she said, "sitting on the bus and this man across the aisle starts waving a copy of law professor Randall Kennedy's new book Nigger. He's got this mean-looking face with little raisiny eyes, and a pointy head, and he's taking this book in and out of his backpack. He's not reading it, mind you. He's just flashing it at black people."

"Don't be so touchy," I responded. "Professor Kennedy says that the N-word is just another word for 'pal' these days. So your guy was probably one of those muted souls you hear about on Fox cable, one of the ones who's been totally silenced by too much political correctness. I'd assume he was just trying to sign 'Have a nice day.'"

"Maybe so," she said, digging through her purse and pulling out a copy of Michael Moore's bestselling Stupid White Men. "But if I see him again, I'm armed with a 'nice day' of my own."

"That's not nice," I tell her. "Besides, I've decided to get in on the publishing boom myself. My next book will be called Penis. I had been going to title it Civil Claims That Shaped the Evidentiary History of Primogeniture: Paternity and Inheritance Rights in Anglo-American Jurisprudence, 1883-1956, but somehow Penis seems so much more concise. We lawyers love concision."

She raised one eyebrow. "And the mere fact that hordes of sweaty-palmed adolescents might line up to sneak home a copy, or that Howard Stern would pant over it all the way to the top of the bestseller list, or that college kids would make it the one book they take on spring break--"

"...is the last thing on my mind," I assured her. "Really, I'm just trying to engage in a scholarly debate about some of the more nuanced aspects of statutory interpretation under Rule 861, subsection (c), paragraph 2... And besides, now that South Park has made the word so much a part of popular culture, I fail to see what all the fuss is about. When I hear young people singing lyrics that use the P-word, I just hum along. After all, there are no bad words, just ungood hermeneutics."

"No wonder Oprah canceled her book club," she muttered.

Seriously. We do seem to have entered a weird season in which the exercise of First Amendment rights has become a kind of XXX-treme Sport, with people taking the concept of free speech for an Olympic workout, as though to build up that constitutional muscle. People speak not just freely but wantonly, thoughtlessly, mainlined from their hormones. We live in a minefield of scorched-earth, who-me-a-diplomat?, let's-see-if-this-hurts words. As my young son twirls the radio dial in search of whatever pop music his friends are listening to, it is less the lyrics that alarm me than the disc jockeys, all of whom speak as though they were crashing cars. It makes me very grateful to have been part of the "love generation," because for today's youth, the spoken word seems governed by people from whom sticks and stones had to be wrested when they were children--truly unpleasant people who've spent years perfecting their remaining weapon: the words that can supposedly never hurt you.

The flight from the imagined horrors of political correctness seems to have overtaken common sense. Or is it possible that we have come perilously close to a state where hate speech is the common sense? In a bar in Dorchester, Massachusetts, recently, a black man was surrounded by a group of white patrons and taunted with a series of escalatingly hostile racial epithets. The bartender refused to intervene despite being begged to do something by a white friend of the man. The taunting continued until the black man tried to leave, whereupon the crowd followed him outside and beat him severely. In Los Angeles, the head of the police commission publicly called Congresswoman Maxine Waters a "bitch"--to the glee of Log Cabin Republicans, who published an editorial gloating about how good it felt to hear him say that. And in San Jose, California, a judge allowed a white high school student to escape punishment after the student, angry at an African-American teacher who had suspended his best friend, scrawled "Thanks, Nigga" on a school wall. The judge was swayed by an argument that "nigga" is not the same as "nigger" but rather an inoffensive rap music term of endearment common among soul brothers.

Frankly, if Harvard president Lawrence Summers is going to be calling professors to account for generating controversy not befitting that venerable institution, the disingenuous Professor Kennedy would be my first choice. Kennedy's argument that the word "nigger" has lost its sting because black entertainers like Eddie Murphy have popularized it either dehistoricizes the word to a boneheaded extent or ignores the basic capaciousness of all language. The dictionary is filled with words that have multiple meanings, depending on context. "Obsession" is "the perfume," but it can also be the basis for a harassment suit. Nigger, The Book, is an appeal to pure sensation. It's fine to recognize that ironic reversals of meaning are invaluable survival tools. But what's selling this book is not the hail-fellow-well-met banality of "nigger" but rather the ongoing liveliness of its negativity: It hits in the gut, catches the eye, knots the stomach, jerks the knee, grabs the arm. Kennedy milks this phenomenon only to ask with an entirely straight face: "So what's the big deal?"

The New Yorker recently featured a cartoon by Art Spiegelman that captures my concern: A young skinhead furtively spray-paints a swastika on a wall. In the last panel, someone has put the wall up in a museum and the skinhead is shown sipping champagne with glittery fashionistas and art critics. I do not doubt that hateful or shocking speech can be "mainstreamed" through overuse; I am alarmed that we want to. But my greater concern is whether this gratuitous nonsense should be the most visible test of political speech in an era when government officials tell us to watch our words--even words spoken in confidence to one's lawyer--and leave us to sort out precisely what that means.

In this excerpt from his 2002 book, Studs Terkel recalls an encounter with Dennis Kucinich, "the boy-mayor of Cleveland," and follows his political odyssey.

Nearly four years have elapsed since that merry month of May when France and the whole world were taken aback by a sudden and momentous upheaval.

Jeff Tweedy may be best known to Nation readers as Billy Bragg's collaborator (along with his band Wilco) on the Mermaid Avenue recordings of recent years--two great albums that set previously unpublished Woody Guthrie lyrics to new music.

"It's a great mistake not to feel pleased when you have the chance," a rich, disfigured spinster advises a frail, well-mannered boy in The Shrimp and the Anemone, the first novel in L.P. Hartley's Eustace and Hilda trilogy. The boy has won a hand of piquet, and the spinster has noticed that he has difficulty
enjoying triumphs. Miss Fothergill (like many of Hartley's characters, the spinster has an outlandishly characteristic name) foresees that her 10-year-old friend may not have ahead of him many occasions of pleasure to waste.

Rather than disobey Miss Fothergill, I will readily admit that I have felt pleased while reading Eustace and Hilda, and very pleased while reading Hartley's masterpiece, The Go-Between. It was a spice to my pleasure that even though the Eustace and Hilda trilogy was first published between 1944 and 1947, and The Go-Between in 1953, I had not even heard of L.P. Hartley before the novels were reissued recently as New York Review Books Classics.

I blame my ignorance on an academic education. Hartley is not the sort of author discussed in schools. He is in no way postmodern. He is modern only in his frugality with sentiment and his somewhat sheepish awareness that the ideas of Marx and Freud are abroad in the world, rendering it slightly more tricky than it used to be to write unself-consciously about unathletic middle-class English boys who have been led by their fantasies and spontaneously refined tastes into the country homes of the aristocracy. If Hartley belongs to any academic canon, it would be to the gay novel, whose true history must remain unwritten until the theorists have been driven from the temple and pleasure-loving empiricists loosed upon the literary critical world.

Hartley belongs with Denton Welch and J.R. Ackerley. The three have different strengths: Welch is sensuous, Ackerley is funny and Hartley is a delicate observer of social machinery. But all are sly and precise writers, challenged by a subject inconvenient for novelizing: the emotional life of gay men.

They met the challenge with unassuming resourcefulness, writing what might be called fairy tales. Hans Christian Andersen was their pioneer, as the first modern writer to discover that emotions considered freakish and repellent in adults could win sympathy when expressed by animals and children. Andersen also discovered that a plain style was the best disguise for this kind of trickery and that the disgust of even the most intolerant readers could be charmed away by an invitation to learn how queer characters came to be the way they are. Thus in Ackerley, Welch and Hartley one finds gentle transpositions--from human to animal, from adulthood to childhood, from health to illness--disarmingly exact language and just-so stories about strange desires. Once upon a time, a man fell in love with another man's dog. Once upon a time, a boy on a bicycle was hit by a car and could not find pleasure again except in broken things. Once upon a time, a boy was made to have tea with a crooked-faced, dying woman, and to his surprise he liked her. The effect is a mood of tenderness; the stories are sweet and a bit mournful.

Hartley loved Hans Christian Andersen, but it was another writer who provided him with a defense of gentle transposition as a novelistic practice: Nathaniel Hawthorne, whose daguerreotype by Mathew Brady is the disconcertingly austere frontispiece of The Novelist's Responsibility, Hartley's 1967 collection of literary criticism. In the preface to The Blithedale Romance, Hawthorne had described the novelist's need for a "Faery Land, so like the real world, that in a suitable remoteness one cannot well tell the difference, but with an atmosphere of strange enchantment, beheld through which the inhabitants have a propriety of their own." Hartley quoted the passage with approval.

Lost time was Hartley's fairyland. "The past is a foreign country: they do things differently there," he wrote in the first, and most famous, sentence of The Go-Between. (He may have been echoing the first sentence of A Sentimental Journey, where Laurence Sterne had written, "They order...this matter better in France," which was Sterne's fairyland.) The remembered world could be as rich and vivid as the real one and yet would always stand at a remove. One could visit but not live there. As Hawthorne explained in his introduction to The Scarlet Letter, in another passage quoted by Hartley, there is something romantic about "the attempt to connect a bygone time with the very present which is flitting away from us."

The Go-Between opens with such an attempt. Leo Colston, a bachelor librarian in his 60s, has begun to sort his papers--apparently in preparation for his death, since he seems to have nothing else to look forward to. He starts by opening "a rather battered red cardboard collar-box." It is full of childhood treasures: "two dry, empty sea-urchins; two rusty magnets, a large one and a small one, which had almost lost their magnetism; some negatives rolled up in a tight coil; some stumps of sealing-wax; a small combination lock with three rows of letters; a twist of very fine whipcord; and one or two ambiguous objects, pieces of things, of which the use was not at once apparent: I could not even tell what they had belonged to." At the bottom of the box is a diary, and at first Colston cannot remember what the diary contains. Then he remembers why he does not want to remember it.

My secret--the explanation of me--lay there. I take myself much too seriously, of course. What does it matter to anyone what I was like, then or now? But every man is important to himself at one time or another; my problem had been to reduce the importance, and spread it out as thinly as I could over half a century. Thanks to my interment policy, I had come to terms with life...

A secret naturally arouses the reader's curiosity, but Colston's attitude toward his secret is a further provocation. The events in the diary, he implies, were both inconsequential and traumatic. He preferred a lifelong effort of forgetting over any attempt to come to terms; only by burying "the explanation of me" could he find a way to live. "Was it true...that my best energies had been given to the undertaker's art? If it was, what did it matter?" An unacknowledged wound, a buried definition of the self... The penumbra around Colston's secret is typical of a closeted homosexual, and yet what follows is neither a same-sex love story nor a coming-out narrative.

In the course of the novel, Colston does discover the facts of life and has at least an intuition of his oblique relation to them, but in The Go-Between Hartley was most intensely concerned with his hero's first experiences of sin and grace. This second, more surprising parallel with Hawthorne is the crucial one. Hartley once wrote that "Hawthorne thought that human nature was good, but was convinced in his heart that it was evil." Hartley was in a similar predicament.

Who would have guessed that the Edwardian sexual awakening of a delicate, precociously snobbish 13-year-old would have anything in common with the Puritan crimes and penitence that fascinated Hawthorne? Yet for Hartley, as for Hawthorne, the awareness of sin is a vital stage of education and a condition of maturity. At first young Leo Colston resists it. "It was like a cricket match played in a drizzle, where everyone had an excuse--and what a dull excuse!--for playing badly."

His moral code at the outset is the pagan one of schoolboys; he believes in curses and spells, and in triumphing over enemies by any means except adult intervention. But at the invitation of a classmate, Leo spends his summer vacation at Brandham Hall, a well-appointed Georgian mansion in Norfolk, and there his world is softened by love, in the person of the classmate's older sister, Marian. She is beautiful, musical and headstrong. Leo brings her messages from her fiancé, Hugh Winlove, Lord Trimingham, and billets from her lover, a local farmer named Ted Burgess. With her love comes sin--not because sexuality is evil, though it may be, but because after he has felt its touch, Leo can no longer think of the people he struggles with as enemies. The lovers make a terrible use of him, but he cares most about those who use him worst. In their triangle, he is incapable of taking a side; he is, after all, their go-between.

If you map Hartley onto Hawthorne too methodically, you arrive at the odd conclusion that Leo is part Chillingworth, part Pearl. This is not quite as silly as it sounds. Like them, Leo is jealous of the lovers he observes and is trapped in their orbit; nothing is lost on him, and he is unable to make emotional sense of what he knows. (His apprehension without comprehension is a boon for the reader, who through him sees the social fabric in fine focus.) But unlike Hawthorne's characters, Leo is a boy starting his adolescence, and that process, which he fears will defeat him, is at the heart of The Go-Between. Leo knows that the end of his childhood ought to be "like a death, but with a resurrection in prospect." His resurrection, however, is in doubt.

Like most fairy tales, the tale of how Leo becomes a fairy will not be fully credible to worldly readers. The Oedipal struggle will seem too bald, the catastrophe too absolute. Hartley was aware of this shortcoming. He knew that he found sexuality more awful than other people did, and in The Novelist's Responsibility, he wrote about his attempt to compensate for it while writing the Eustace and Hilda trilogy: "I remember telling a woman novelist, a friend of mine, about a story I was writing, and I said, perhaps with too much awe in my voice, 'Hilda is going to be seduced,' and I inferred that this would be a tragedy. I shall never forget how my friend laughed. She laughed and laughed and could not stop: and I decided that my heroine must be not only seduced, but paralysed into the bargain, if she was to expect any sympathy from the public."

Hartley's friend would probably have laughed at Hilda's paralysis, too. In the trilogy, Hilda is the older, stronger-willed sister of the exquisitely polite Eustace, who grows up in her shadow, a little too fond of its darkness. Their symbiosis in the first volume is brilliant and chilling, but her paralysis in the third is unconvincing. It is implausible that the demise of a love affair would literally immobilize an adult woman. Fortunately, it happens offstage, and a few of the book's characters do wonder if she is malingering.

However, the lack of perspective may be inextricable from Hartley's gifts. His writing is so mournful and sweet because he is willing to consider seriously terrors that only children ought to have, and perhaps only a man who never quite figured manhood out could still consider them that way. The second and third volumes of Eustace and Hilda are as elegant as the first, but not as satisfying, because Eustace's life becomes too vicarious to hold the reader's attention--and because the characters have grown up. Hartley's understanding of children is sophisticated, but he seems to have imagined adults as emotionally limited versions of them--as children who have become skilled at not thinking unpleasant thoughts. As a writer, his best moments are in describing terror at age 13 and the realization at 60-odd that one need not have been so terrified after all. In The Go-Between, artfully, the intervening years are compressed into the act of recollection, and the novel's structure fits the novelist's talents like a glove.

There is an overall disposition to approach each Whitney Biennial as a State of the Art World Address in the form of an exhibition, organized by a curatorial directorate, presenting us with a reading, more or less objective, of what visual culture has been up to in the preceding two years. It is widely appreciated that on any given occasion, the directorate will be driven by enthusiasms and agendas that compromise objectivity. So there has sprung up a genre of what we might call Biennial Criticism, in which the organizers are taken to task for various distortions, and when these have been flagrant, as in the 1993 or, to a lesser degree, the 1995 Biennial, the critics almost speak as one. Everyone knew, in 1993, that a lot of art was being made that took the form of aggressively politicized cultural criticism, but the Biennial made it appear that there was very little else, and it had the effect of alienating the viewers by treating them as enemies. Again, everyone recognized in 1995 that artists were exploring issues of gender identity--but there was a question of whether these preoccupations were not overrepresented in what was shown. Anticipating the barrage of critical dissent, the Whitney pre-emptively advertised the 2000 Biennial as the exhibition you love to hate, making a virtue of adversity. But Biennials and Biennial Criticism must be taken as a single complex, which together provide, in the best way that has so far evolved, as adequate a picture as we are likely to get of where American artistic culture is at the moment. The Whitney deserves considerable credit for exposing itself to critical onslaughts from various directions in this periodic effort to bring the present art world to consciousness. Art really is a mirror in which the culture gets to see itself reflected, but it requires a fair amount of risk and bickering to get that image to emerge with any degree of clarity.

As it happens, my own sense of the state of the art world is reasonably congruent with that of Lawrence Rinder, who bears chief responsibility for Biennial 2002, though I have to admit that I was unfamiliar with a good many of the artists whose work has been selected. This unfamiliarity can even be taken as evidence that Rinder's selection corresponds to the general profile of art-making today.

It is almost as though any sample drawn from the art world would yield much the same profile of artistic production, so long as it consisted mainly of artists in their 30s and early 40s who have been formed in one or another of the main art schools and keep up with the main art periodicals. A great Biennial could have been put together using older artists with international reputations, but somehow emphasizing the young does not seem a curatorial caprice. It is increasingly an art-world premise that what is really happening is to be found among the young or very young, whose reputations have not as yet emerged. A painter who taught in California told me that he was constantly pressed, by dealers and collectors, to tell them who among the students was hot. So as long as it resembles a fairly large show of MFA students graduating from a major art school--as Biennial 2002 mostly does--a quite representative Biennial can be put together of artists whose work is hardly known at all. Somehow, if it were widely known, it would not have been representative.

Art today is pretty largely conceptual. It is not Conceptual Art in the narrow sense the term acquired when it designated one of the last true movements of late Modernism, in which the objects were often negligible or even nonexistent, but rather in the sense that being an artist today consists in having an idea and then using whatever means are necessary to realize it. Advanced art schools do not primarily teach skills but serve as institutes through which students are given critical support in finding their own way to whatever it takes to make their ideas come to something. This has been the case since the early 1970s.

It is amazing how many young people want to be artists today. I was told that there are about 600 art majors in a state university in Utah--and there will be at least that many applicants for perhaps twenty places in any one of the major MFA programs, despite a tuition equal to that for law or business school. Few will find teaching positions, but their main impulse is to make art, taking advantage of today's extreme pluralism, which entails that there are no antecedent prohibitions on how their art has to be. Every artist can use any technology or every technology at once--photography, video, sound, language, imagery in all possible media, not to mention that indeterminate range of activities that constitute performances, working alone or in collaboratives on subjects that can be extremely arcane.

Omer Fast shows a two-channel video installation with surround sound about Glendive, Montana, selected because it is the nation's smallest self-contained television market. Who would know about this? Or about Sarah Winchester, who kept changing the architecture of her house in San Jose, California, because she felt she was being pursued by victims of the Winchester rifle, which her late husband manufactured--a story Jeremy Blake chose as the subject of a 16-millimeter film, augmented by drawings and digital artworks transferred to DVD? I pick these out not as criticism but as observations. They exemplify where visual culture is today.

Initially I felt that painting was somewhat underrepresented, but on reflection I realize that there is not much of the kind of easel painting done now that makes up one's composite memory of Biennials past. What I had to accept was that artists today appropriate vernacular styles and images--graffiti, cartoons, circus posters and crude demotic drawing. Artists use whatever kinds of images they like. Much as one dog tells another in a New Yorker cartoon that once you're online, no one can tell you're a dog, it is less and less easy to infer much about an artist's identity from the work.

At least three graduate students in a leading art school I visited not long ago choose to paint like self-taught artists. The self-taught artist Thornton Dial Senior appeared in Biennial 2000, but his contribution did not look like anyone's paradigm of outsider art, so no one could have known that it was not by an MFA from the Rhode Island School of Design or CalArts. There are some quilts in Biennial 2002 by Rosie Lee Tompkins, who is Afro-American, as we can tell from items in her bibliography (Redesigning Cultural Roots: Diversity in African-American Quilts). Since this year's catalogue does not identify artists with reference to their education, we don't know--nor does it matter--whether Tompkins is self-taught. But it is entirely open to white male graduate students to practice quilt-making as their art if they choose to.

Whether someone can paint or draw is no more relevant than whether they can sew or cook. Everything is available to everyone--the distinctions between insider and outsider, art and craft, fine art and illustration, have altogether vanished. I have not yet seen a Biennial with the work of Sophie Matisse or George Deem in it, both of whom appropriate the painting styles of Vermeer and other Old Masters, but they express the contemporary moment as well as would an artist who drew Superman or The Silver Surfer. Mike Bidlo--also not included--has been painting Jackson Pollocks over the past few years. In a way I rather admire, Biennial 2002 presents us with a picture not just of the art world but of American society today, in an ideal form in which identities are as fluid and boundaries as permeable as lifestyles in general.

The openness to media outside the traditional ones of painting, drawing, printmaking, photography and sculpture has made it increasingly difficult to see everything on a single visit in the recent Biennials, and this is particularly so in Biennial 2002. But just seeing the things that can be taken in on such a visit may not give the best idea of what is really happening in the art world. Biennial 2002 includes the work of eight performance artists or teams of performance artists, for example, and theirs may be among the most revealing work being done today; but you will have to read about their work in the catalogue, since the performances themselves do not take place on the premises of the museum. I'll describe three artists whose most striking work is performance, since together they give a deeper sense of visual culture than we might easily get by looking at the objects and installations in the museum's galleries.

Let's begin with Praxis--a performance collaborative formed in 1999 that consists of a young married couple, Delia Bajo and Brainard Carey. On any given Saturday afternoon, Praxis opens the East Village storefront that is its studio and home to passers-by. The ongoing performance, which they title The New Economy, consists in offering visitors any of four meaningful but undemanding services from the artists: a hug, a footbath, a dollar or a Band-Aid, which comes with the kind of kiss a mommy gives to make it all better. Praxis draws upon a fairly rich art history. Its services are good examples of what were considered actions by Fluxus, an art movement that has frequently figured in this column. Fluxus originated in the early 1960s as a loose collective of artists-performers-composers who were dedicated, among other things, to overcoming the gap between art and life. The movement drew its inspiration from Marcel Duchamp, John Cage and Zen--and from the visionary figure George Maciunas, who gave it its name. It is a matter for philosophers to determine when giving someone a hug is a piece of art--but an important consideration is that as art it has no particular connection to the art market, nor is it the sort of thing that is easily collected. And it requires no special training to know how to do it.

There is something tender and affecting in Praxis's ministrations, which connects it to a second art-historical tradition. It has, for example, a certain affinity to Felix Gonzalez-Torres, who piled up candies in the corner of a gallery for people to help themselves to, or to the art of Rirkrit Tiravanija, which largely consists in feeding people fairly simple dishes, which he cooks for whoever comes along. Praxis's art is comforting, in much the way that Tiravanija's work is nurturing. The people who enter Praxis's storefront are not necessarily, as the artists explain, seeking an art experience. Neither are those who eat Tiravanija's green curry in quest of gastronomic excitement. The artists set themselves up as healers or comfort-givers, and the art aims at infusing an increment of human warmth into daily life. There was not a lot of that in Fluxus, but it has become very much a part of art today, especially among younger artists. The moral quality of Praxis belongs to the overall spirit of the Williamsburg section of Brooklyn, which recently emerged as an art scene. On one of my visits there, a gallerist asked me what I thought of the scene and I told him I found it "lite," not intending that as a criticism. "We want to remain children," he told me. The artists there could not have been nicer, and this seems generally the feeling evoked by Biennial 2002. It is the least confrontational Biennial of recent years.

There is, for example, not much by way of nudity, though that is integral to the performances of the remarkable artist Zhang Huan, which stand at the opposite end of the spectrum from Praxis. Zhang Huan was expelled from China in 1998. His work fuses certain Asiatic disciplines laced with appropriations from various Western avant-gardes. In each of his performances, Zhang Huan's shaved head and bare, wiry body are put through trials in which, like a saint or shaman, the performer displays his indifference to injury. His nakedness becomes a universal emblem of human vulnerability. There is a remarkable, even stunning, poetry in these performances, and they feel in fact like religious ordeals, like fasting or mortification, undertaken for the larger welfare. I have seen the film of an amazing performance, Dream of the Dragon, in which Zhang Huan is carried by assistants into the performance space on a large forked branch of a tree, like an improvised cross. The assistants cover his body with a kind of soup they coat with flour. A number of leashed family dogs are then allowed to lick this off with sometimes snarling canine voracity.

The performances of William Pope.L, which involve great physical and, I imagine, psychological stress, stand to Zhang Huan's as West stands to East. His crawl pieces, of which he has done perhaps forty since 1978, perform social struggle, as he puts it. His contribution to Biennial 2002, titled The Great White Way, will involve a twenty-two-mile crawl up Broadway, from the Statue of Liberty to the Bronx, and will take five years. In a film excerpt, Pope.L is seen in a padded Superman suit and ski hat, a skateboard strapped to his back, negotiating a segment of the crawl. Sometimes he uses the skateboard as a dolly, but that seems hardly less strenuous than actual crawling. Pope.L is African-American, and somehow one feels that crawling up the Great White Way has to be seen as a symbolic as well as an actual struggle. But it also has the aura of certain ritual enactments that require worshipers to climb some sacred stairway on their knees, or to achieve a required pilgrimage by crawling great distances to a shrine.

Since foot-washing, which is one of Praxis's actions, is widely recognized as a gesture of humility as well as hospitality in many religious cultures, the three performance pieces bear out one of Rinder's observations that a great many artists today are interested in religious subjects. He and I participated in a conversation organized by Simona Vendrame, the editor of Tema Celeste, and published in that magazine under the title New York, November 8, 2001. We were to discuss the impact of September 11 on American art. With few exceptions, the art in Biennial 2002 was selected before the horror, though it is inevitable that it colors how we look at the exhibits.

In a wonderful departure, five commissioned Biennial works are on view in Central Park, including an assemblage of sculptures in darkly patinated bronze by Kiki Smith, of harpies and sirens. These figures have human heads on birds' bodies, and as they are exhibited near the Central Park zoo, they suggest evolutionary possibilities that were never realized. When I saw pictures of them, however, I could not help thinking they memorialized those who threw themselves out of the upper windows of the World Trade Center rather than endure incineration. I had read that one of the nearby schoolchildren pointed to the falling bodies and said, "Look, the birds are on fire!"

I don't really yet know what effect on art September 11 actually had, and it might not be obvious even when one sees it. The artist Audrey Flack, whose work is in the Biennial, told me that as soon as she could get away from the television screen, she wanted only to paint fishing boats at Montauk. A good bit of what Rinder has selected could as easily as not have been done in response to the terrible events, but he said that he had sensed some sort of change taking place in artists' attitudes well before September 11: "What I was finding over and over again was artists saying things to me like 'Well, to be honest, what I'm really doing is searching for the truth' or 'What matters the most to me is to make the most honest statement I possibly can.'" I don't think one can easily tell from looking at the art that it embodies these virtues, any more than one could tell from Flack's watercolors that they constituted acts of healing for her. But that is what they mean and are.

One consequence of art's having taken the direction it has is that there is not always a lot to be gained from what one sees without benefit of a fair amount of explanation. Biennial 2002 has been very generous in supplying interpretive help. Some people have complained that the wall labels go too far in inflecting the way one is supposed to react to the work, but I am grateful for any help I can get; I found the wall texts, like the catalogue, indispensable. And beyond that, you can hear what the artists thought they were doing by listening to recorded comments on the rented electronic guides. I cannot see enough of the work of Kim Sooja, a Korean artist who works with traditional fabrics from her culture. But her statements contribute to the metaphysics of fabric--to what Kierkegaard calls the meaning of the cloth--and are worth thinking about in their own right.

You will encounter Kim Sooja's Deductive Object, consisting of Korean bedcovers placed over tables at the zoo cafe in Central Park, just north of Kiki Smith's mythological animals and just south of a towering steel tree by Roxy Paine. Since Central Park has been opened up to temporary exhibitions, I would like to urge a longstanding agenda of my own. I cannot think of anything better capable of raising the spirits of New York than installing a beautiful projected piece by Christo and Jeanne-Claude, which, as always with their work, will not cost the city a nickel. They envision a series of tall gates, posted at regular intervals all along the main walkway of the park. Hanging from each will be saffron-colored strips of cloth that will float above us as we follow the path for as long as we care to--an undulating roof, since the strips are just long enough to cover the distance between the gates. The whole world will look with exaltation upon this work, which will express the same spirituality and truth that today's artists, if Lawrence Rinder is right, have aspired to in their work. And billions of dollars will flow into our economy as they pilgrim to our city.

I think the art world is going to be the way it is now for a very long time, even if it is strictly unimaginable how artworks themselves will look in 2004. Meanwhile, I think well of Biennial 2002, though I have been able to write of only a few of the 113 artists that make it up. You'll have to find your own way, like the artists themselves. Take my word that it is worth the effort. That's the best Biennial Criticism is able to do in the present state of things.

Filmmakers in sub-Saharan Africa tend to divide their attention between city life today and village life once upon a time. This rule has its exceptions, of course; but if you're searching for an African film that truly overcomes the split, deftly merging the contemporary with the folkloric, I doubt you'll find anything more ingenious than Joseph Gaï Ramaka's retelling of Carmen. Set along the coast of modern-day Dakar, this Karmen Geï drapes current Senegalese costumes upon the now-mythic figures of Mérimée and Bizet, puts old-style songs and African pop into their mouths and has its characters dance till they threaten to burst the frame.

The film's American distributor, California Newsreel, suggests that Karmen Geï is Africa's first movie musical--that is, an all-singing, all-dancing story, rather than a story with song and dance added on. If so, that breakthrough would count as another major achievement for Ramaka. But nothing can matter in any Carmen without Carmen herself; and so I propose that Ramaka's true claim to fame is to have put Djeïnaba Diop Gaï on the screen.

Practically the first thing you see of her--the first thing you see at all in Karmen Geï--is the heart-stopping vision of her two thighs slapping together, while a full battery of drummers pounds away. We discover Karmen in the sand-covered arena of a prison courtyard, where she is dancing so exuberantly, lustily, violently that you'd think this was a bullring and she'd just trampled the matador; and at this point, she hasn't even risen from her seat. Wait till she gets up and really starts to move, shaking and swerving and swiveling a body that's all curves and pure muscle, topped by a hairdo that rises like a mantilla and then spills down in ass-length braids. A rebel, an outlaw, a force of nature, an irresistible object of desire: Gaï's Karmen embodies all of these, and embodies them in motion. The only part of her that seems fixed is her smile, shining in unshakable confidence from just above an out-thrust chin.

Is it just the memory of other Carmens that brings a bullring to mind? Not at all. There really is a contest going on in this opening scene, and Karmen is winning it, effortlessly. She is dancing, before a full assembly of the jail's female prisoners, in an attempt to seduce the warden, Angélique (Stéphanie Biddle). Pensive and lighter-skinned than Karmen, dressed in a khaki uniform with her hair pulled back tight, Angélique yields to her prisoner's invitation to dance and soon after is stretched out in bed, sated, while Karmen dashes through the hallways and out to freedom.

From that rousing start, Ramaka goes on to rethink Carmen in ways that vary from plausible to very, very clever. It's no surprise that the Don José figure (Magaye Niang) is a police officer; the twist is that Karmen snares him by breaking into his wedding, denouncing all of respectable Senegalese society and challenging his bride-to-be to a dance contest. The chief smuggler (Thierno Ndiaye Dos) is a courtly older man who keeps the lighthouse; and Escamillo, the only person in the movie big enough to look Karmen in the eye, is a pop singer, played with smooth assurance by pop star El Hadj Ndiaye.

Ramaka's best invention, though, is Angélique, a previously unknown character who is both a lovesick, uniformed miscreant and a doomed woman--that is, a merger of Don José and Carmen. By adding her to the plot, the film gives Karmen someone worth dying for. The details of how she arrives at that death are a little muddled--the direction is elliptical at best, herky-jerky at worst--but thanks to Angélique's presence in the story, the climax feels more tender than usual, and more deliberate. Karmen shows up for her final scene decked out in a red sheath, as if to insure the blood won't spoil her dress.

Karmen Geï has recently been shown in the eighth New York African Film Festival, at Lincoln Center's Walter Reade Theater. It is now having a two-week run downtown at Film Forum.

The title of Fabián Bielinsky's briskly intriguing Nine Queens would seem to refer to a sheet of rare stamps--or, rather, to a forgery of the stamps, which two Buenos Aires con artists hope to sell to a rich businessman. But then, the businessman is himself a crook, the con artists don't actually know one another and the sale just might involve real stamps. You begin to see how complicated things can be in this movie; and I haven't yet mentioned the sister.

The action, which stretches across one long day, begins in the convenience store of a gas station, where fresh-faced Juan (Gastón Pauls) draws the attention of Marcos (Ricardo Darín), an older, more aggressive swindler. Teamed up impromptu, just for the day, the two stumble into the con of a lifetime when Marcos's beautiful, prim, angry sister (Leticia Brédice) summons them to the luxury hotel where she works. She just happens to need Marcos to cart away one of his ailing buddies; and the buddy just happens to know of a guest who might buy some stamps.

No, nothing is as it seems. But Bielinsky's storytelling is so adept, his pace so fleet, his actors so much in love with every nuance of their dishonesty that you will probably laugh with delight, even as you're being dealt a losing hand of three-card monte.

And if you want social relevance, Nine Queens will give you that, too. As if Juan (or was it Marcos?) had scripted the whole country, this release swept the critics' awards for 2001 just in time for Argentina's economy to crash. Enjoy!

I hadn't intended to review this last film; but since it's become a critical success, here goes:

The Piano Teacher is a pan-European remake of Whatever Happened to Baby Jane?, with French stars Isabelle Huppert and Annie Girardot playing the sacred-monster roles and Austrian director Michael Haneke fastidiously avoiding the camp humor that alone could have saved the movie. Set in Vienna and cast (except for the leads) with German-speaking actors, whose lips flop like dying fish around their dubbed French syllables, The Piano Teacher is a combination of immaculately composed shots and solemnly absurd dialogue, much of it about the music of Franz Schubert. "That note is the sound of conscience, hammering at the complacency of the bourgeoisie." Sure it is. Add a sequence in which Huppert humps Girardot (her own mother!) in the bed they share, throw in an extended sex scene where the characters grandly ignore any risk of interruption (though they're grappling in a public toilet), and you've got a movie that ought to have made classical music dirty again.

But to judge from critics' reactions, Schubert remains the touchstone of respectability, and The Piano Teacher is somehow to be taken seriously.

The aura of high-mindedness that cloaks the action (at least for some viewers) emanates mostly from Huppert. No matter what her character stoops to--doggie posture, for the most part--Huppert seems never to lower herself. She maintains her dignity because she is being brave. She is acting. She is allowing herself to be shown as sexually abject before an athletic younger man, Benoît Magimel, who has a cleft chin and peekaboo blond hair. Huppert has been similarly abject in recent years, in Benoît Jacquot's The School of Flesh, for example. I wonder what hope other women may nurture for themselves after 40, when this wealthy, celebrated, greatly accomplished and famously beautiful woman has no better prospects. I know we're expected to give prizes to Huppert for such ostentatious self-abnegation. (Last year, at Cannes, she collected a big award.) But what pleasure are we supposed to get from seeing the character humiliated?

A dishonest pleasure, I'd say; the same kind that's proposed in The Piano Teacher's now-notorious scene of genital mutilation. The meaning of the scene, for those who are pleased to give it one, is of course transgressive, subversive and otherwise big word-like. See how (women) (the Viennese) (the middle class) (fill in the blank) are repressed, how they turn against themselves, how they make themselves and everyone around them suffer. Then again, if you subtract all that guff about the complacent bourgeoisie, maybe the scene means nothing more than "Ew, gross!"

I have admired Haneke's films in the past, beginning with the antiseptically grim The Seventh Continent and going on to the tough, much-maligned Benny's Video. When Haneke has proposed that clean, affluent, educated people may do horrible things, I have agreed, as of course I must, accepting what would have been a mere platitude for the sake of the films' clear vision and genuine sense of dread. But as I watched Huppert's preposterous impersonation of a music teacher, I began to wonder if Haneke knows that characters can be something other than horrid.

The dynamics of Schubert's music represent emotional "anarchy," says Huppert at one point, in a pronouncement that would get a pedagogue sacked from any self-respecting conservatory. Listen to Rudolf Serkin play the great B-flat piano sonata, varying his touch with every breath, and you will hear not anarchy but imagination. It's the quality most lacking in The Piano Teacher--followed closely by warmth, humor, realism and purpose.

Fun at Home: Nation readers will want to know that Zeitgeist Video has just brought out a DVD of Mark Achbar and Peter Wintonick's fine documentary Manufacturing Consent: Noam Chomsky and the Media. All the original fun is there, plus added features such as Chomsky's own commentary on the picture. The film is now ten years old. You will probably find it's more to the point than ever.

Perhaps time is our invention
To make things seem to move
Like the uncovering tail of the blue jay
As it lights its feet on the wet
Trembling wood.

Perhaps the seasons are really not
More than a single space with walls inside, disconnected
While fall and winter, and spring
Which we always anticipate, are only
Expansions of our own longings.

Perhaps there is only the now
Neither age nor youth, not even the vertigo of memories stilettoed
Except wounded into this present second
Shorter than the birth of a cell, or the nest dropped
With the sun and the rain always out together.

This center is absolute, it needs no endlessness
For heaven or hell. Or for creation, our own illusion of ourselves.
The minor variations we unfold are all the same
Inherently permutating at once
Repeating one design. Obscure. Lit at the edges of our time.

There are those opposed to the use of cloning technology to create human embryos for stem-cell research whose concerns emanate from commitments to social justice. One of their arguments runs as follows: The idea driving this medical research is that by creating an embryo through cloning, we can produce embryonic stem cells that are a perfect genetic match for a patient. All that is required to conduct the cloning is a skin cell from which to extract the patient's DNA and...a human egg.

Where, cry out the social justice advocates, are we going to get all these eggs for all these patients? Do the math, they suggest: 17 million American diabetics, needing anywhere from 10 to 100 eggs each, since the cloning technology is far from efficient...and even if you can pull that off, Christopher Reeve is still not walking, Michael J. Fox and Janet Reno still tremble and Ronald Reagan still doesn't remember who Ronald Reagan was. The social justice folk maintain that the billions of eggs required for embryonic stem cell therapies for the millions of Americans suffering from chronic and degenerative diseases will be obtained through exploitation of poor women in this country and the world over. Surplus value will take on an even more nefarious meaning.
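The arithmetic behind that worry is easy enough to run (a rough tally only, using the figures cited above rather than any estimate of what an actual therapy would require): 17 million diabetics at 10 eggs apiece comes to 170 million eggs; at 100 eggs apiece, 1.7 billion--and that is for diabetes alone, before the other chronic and degenerative diseases are counted.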

Still, the early results from embryonic stem-cell therapy in mice are so dramatic that not to pursue this medical research is recognized as morally obscene and just plain stupid. At the University of California, Dr. Hans Keirstead was able to implant neurological tissue derived from embryonic stem cells in a mouse with partial spinal cord injury so that after eight weeks, the mouse had regained most of its ability to walk and, of major significance to the quarter-million Americans suffering from this tragic condition, had also regained bladder and bowel control. Yet, the question remains, where are we going to get all those eggs?

A call to Stanford University's Paul Berg, a Nobel laureate who has been testifying to Congress on behalf of embryonic stem-cell research, helps elucidate the answer: When it comes to the research, he says, the quantity required may not be a problem. But if the desired therapeutic potential of embryonic stem cells is fully realized, the need for eggs will be great and could short-circuit the availability of these therapies. But a solution to that may be possible, Berg insists. If research is carried out that identifies the biochemicals in the egg directing the genetic material to develop into an embryo, then we could extract and fractionate those biochemicals and insert them into any skin cell, for example, for use in the cloning process. Voilà! A skin cell becomes an egg, and skin cells are plentiful.

The immediate enthusiasm for this breakthrough scientific idea, which could help Reeve walk again while simultaneously obviating the motive for an exploitative human egg market, is quickly tempered by the full realization of what Berg has explained: When we acquire the ability to use any cell as an egg, we will have removed another obstacle to achieving complete control over human reproduction. Admittedly, complete control over the production of reproduction will require a womb for gestation--but that ultimately should prove to be just another biochemical matter for extraction and fractionation.

This, then, is how it goes in biotechnology, the essential dynamic that simultaneously gives rise to medical hope and moral vertigo. Each step forward produces a new problem, the solution to which demands further control over the biological mechanism known as a human being. But this somehow impinges on human beings or some portion of ourselves that we value. To deal with the attendant moral quandaries, a method is found to isolate and duplicate the underlying molecular process. The moral quandary has thus been replaced by an extracorporeal biochemical process, no longer strictly identified as human, and therefore a process that no one can reasonably value apart from its use. The problem, as bioethicist Eric Juengst puts it, is that we could thereby successfully cope with every moral dilemma posed by biotechnology and still end up with a society none of us would wish to live in. For Francis Fukuyama, this is Our Posthuman Future, as he has titled his new book on the subject.

Fukuyama's most famous previous theoretical foray was to declare, in 1989, an end to history, whereby a capitalist liberal democratic structure represented the final and most satisfying endpoint for the human species, permitting the widest expression of its creative energies while best controlling its destructive tendencies. He imagined that ultimately, with the universal acceptance of this regime, the relativist impasse of modern thought would in a sense resolve itself.

But thirteen years after the end of history, Fukuyama has second thoughts. He's discovered that there is no end of history as long as there is no end of science and technology. With the rapidly developing ability of the biological sciences to identify and then alter the genetic structure of organisms, including humans, he fears the essence of the species is up for grabs. Since capitalist liberal democratic structures serve the needs of human nature as it has evolved, interference by the bio-engineers with this human nature threatens to bring the end of history to an end.

The aim of Our Posthuman Future is "to argue that [Aldous] Huxley was right," Fukuyama announces early on, referring to Huxley's 1932 vision of a Brave New World. Multiple meanings are intended by Fukuyama: The industrialization of all phases of reproduction. The genetic engineering of the individuals produced by that process, thereby predetermining their lives. The tyrannical control of this population through neurochemical intervention, making subservience experientially pleasurable. Fukuyama cites specific contemporary or projected parallels to Huxley's Hatchery and Conditioning Center, Social Predestination Room and soma. In Fukuyama's terms, the stakes in these developments are nothing less than human nature itself.

The first of the book's three parts lays out the case that the biotechnologically driven shift to a posthuman era is already discernible and describes some of the potential consequences. Prozac and Ritalin are precursors to the genomically smart psychotropic weapons of the near future. Through these drugs, which energize depressed girls and calm hyperactive boys, we are being "gently nudged toward that androgynous median personality, self-satisfied and socially compliant, that is the current politically correct outcome in American society." Standardization of the personality is under way. This is the area to watch, Fukuyama asserts, because virtually everything that the popular imagination envisions genetic engineering accomplishing is much more likely to be accomplished sooner through neuropharmacology.

Increased life spans and genetic engineering also offer mostly dystopic horizons, whereby gerontocracies take power over societies whose main purpose has become the precision breeding of their progeny. The ancient instincts for hierarchical status and dominance are still the most powerful forces shaping this new world born from biotechnology. Since, as Fukuyama sees it, science does not necessarily lead to the equality of respect for all human beings demanded by liberal egalitarianism, the newest discoveries will serve the oldest drives. We are launched on a genetic arms race.

But be warned: We may not arrive in that new world through some dramatic struggle in which we put up a fight. Rather, the losses to our humanity may occur so subtly that we might "emerge on the other side of a great divide between human and posthuman history and not even see that the watershed had been breached because we lost sight of what that [human] essence was."

If this terrible event is to be prevented, then the human essence, which Fukuyama correlates with human nature itself, must be identified and kept inviolable. But where is the line to be drawn around "human nature," a line to which we can all adhere so that we might reap the benefits of biotechnology while preventing the nightmare scenarios from ever coming to pass?

The entire world today wants the answer to this. Fukuyama promises to deliver it. But despite the clarity with which he announces his mission, the author advises his readers, "Those not inclined to more theoretical discussions of politics may choose to skip over some of the chapters here." Yet these are the very chapters containing the answer we all seek in order to tame the biotechnology beast! This, then, signals that we are entering dangerous ground, and we will need to bear with the author's own means of revealing his great discovery, which may be skipped over at our own peril.

In this heart of the book, titled "Being Human," Fukuyama first seeks to restore human nature as the source of our rights, our morality and our dignity. In particular, he wishes to rescue all these dimensions from the positivist and utilitarian liberal philosophers who, closely allied with the scientific community, have dominated the debate over biotechnology. According to the author, these philosophers assign rights everywhere and emphasize the individual as the source of moral concern. In doing so, they put humankind and its collective life at risk before the juggernaut of biotechnology. John Rawls and Ronald Dworkin, among others, have elevated individual autonomy over inherently meaningful life plans, claims Fukuyama, who then questions whether moral freedom as it is currently understood is such a good thing for most people, let alone the single most important human good.

Rather than our individual autonomy or moral freedom, Fukuyama wishes that we would attend to the logic of human history, which is ultimately driven by the priorities that exist among natural human desires, propensities and behaviors. Since he wishes us to shift ground to the logic of the inherent and the natural, he must finally define that core composing human nature:

The definition of the term human nature I will use here is the following: human nature is the sum of the behavior and characteristics that are typical of the human species, arising from genetic rather than environmental factors.

Later he will refine this further to the innate species-typical forms of cognition, and species-typical emotional responses to cognition. What he is really after is not just that which is typical of our species but that which is unique to human beings. Only then will we know what needs the greatest safeguarding. After hanging fire while reviewing the candidates for this irreducible, unique core to be defended, including consciousness and the most important quality of a human being, feelings, Fukuyama finally spills the beans:

What is it that we want to protect from any future advances in biotechnology? The answer is, we want to protect the full range of our complex, evolved natures against attempts at self-modification. We do not want to disrupt either the unity or the continuity of human nature, and thereby the human rights that are based on it.

So, where are we? It would seem we have gone full circle. Human nature is defined by...human nature! To the extent that it is capable of being located in our material bodies, it is all that arises from our genetics. Any attempt at greater precision is a violation of our unity or continuity--and threatens to expose the author's empty hand. Through such sophistry, Fukuyama wishes to assert mastery over any biotechnological innovation that he considers threatening, since he can now arbitrarily choose when it is disruptive of the unity or continuity of the human nature arising from our genetics. Even a heritable cancer could qualify for protection under Fukuyama's rubric for that which is to be defended from biotechnological intervention.

Indeed, there are those who share Fukuyama's view of the biological bases of human social life yet draw the opposite conclusion about human bioengineering, viewing it as humanity's last best hope.

The remainder of the book is a potpourri of tactical suggestions (embedded in rhetoric cloned from Fukuyama's mentor in these matters, bioethicist Leon Kass) about which biotechnologies should be controlled, and about the national and international bodies and systems needed if such control is to be effective. That, in the end, may be the most surprising aspect of the book. All this fervid philosophizing in reaction to fears about a Brave New World, working toward the radical conclusion that what is needed is...regulation. Of course, recognition of the need for regulation might well be experienced as a radical trauma by someone who has previously placed an overabundance of faith in the market.

But one would be foolish to believe that Fukuyama has gone all this distance simply to argue for what he refers to at one point as a more nuanced regulatory approach. In his most public engagement with biotechnology thus far, he has endorsed a bill, written on its behalf and testified to Congress in support of it, legislation that would ban not only human reproductive cloning but also nonreproductive cloning for stem-cell research. The bill he supports would also make any doctor who utilizes or prescribes a treatment developed with cloning technology subject to ten years in prison and a $1 million fine. Under this legislation, then, if a cure or treatment for diabetes or heart failure were developed in England using embryo cloning to harvest stem cells for therapy, US physicians would not be allowed access to it for their patients. This is his lesson in how moral freedom is not such a good thing compared with an inherently meaningful life plan. Let the fragile diabetic or spinal cord-injury victim learn the true value of our human nature from their catheterized bladders!

Fukuyama's entire brief depends upon avoiding the consequences of his own logic. Having identified the human essence with our biological human nature, he must evade any further specification or else the particular tissues, cells or molecules would be subject to further discussion and analysis as to whether or not they represent the human essence. Rather than discussion, we should trade in our autonomy and moral freedom for his protections. By the close of the book, any moral qualms on his part fall entirely by the wayside. Fukuyama is perhaps aware that he has failed to make his case except to those ready to believe. The book culminates in a final paragraph that is nothing less than a temper tantrum:

We do not have to accept any of these future worlds under a false banner of liberty, be it that of unlimited reproductive rights or of unfettered scientific inquiry. We do not have to regard ourselves as slaves to inevitable technological progress when that progress does not serve human ends. True freedom means the freedom of political communities to protect the values they hold most dear...

Nice rhetoric until we recall the values of the types of political regimes to which moral freedom and science must be sacrificed. While Fukuyama rails against the Brave New World, he takes the side of Huxley's World Controller, who explains, "Truth's a menace, science is a public danger...That's why we so carefully limit the scope of its researches."

There is an alternative to the fear that human nature must be inviolable because human nature cannot be trusted. We have seen imperious dictates against science and moral freedom delivered by philosophers before. In the recent past, we have evidence of very similar ideas in very similar language issuing from the philosopher whom Fukuyama draws upon for the epigraph beginning the first chapter of his book, Martin Heidegger. In the 1930s Professor Heidegger wanted science to serve the German essence, and it did. Now Professor Fukuyama wants science, and all of us, to serve the human essence, which he equates with his version of sociobiology infused with German romantic holism. Once more, we witness someone who would stop tyranny by imposing a tyranny of his own. Since Francis Fukuyama now sits on the President's Council on Bioethics, we should be grateful for the warning.

"Thirty years from now the big university campuses will be relics," business "guru" Peter Drucker proclaimed in Forbes five years ago. "It took more than 200 years for the printed book to create the modern school. It won't take nearly that long for the [next] big change." Historian David Noble echoes Drucker's prophecies but awaits the promised land with considerably less enthusiasm. "A dismal new era of higher education has dawned," he writes in Digital Diploma Mills. "In future years we will look upon the wired remains of our once great democratic higher education system and wonder how we let it happen."

Most readers of this magazine will side with Noble in this implicit debate over the future of higher education. They will rightly applaud his forceful call for the "preservation and extension of affordable, accessible, quality education for everyone" and his spirited resistance to "the commercialization and corporatization of higher education." Not surprisingly, many college faculty members have already cheered Noble's critique of the "automation of higher education." Although Noble himself is famously resistant to computer technology, the essays that make up this book have been widely circulated on the Internet through e-mail, listservs and web-based journals. Indeed, it would be hard to come up with a better example of the fulfillment of the promise of the Internet as a disseminator of critical ideas and a forum for democratic dialogue than the circulation and discussion of Noble's writings on higher education and technology.

Noble performed an invaluable service in publishing online the original articles upon which this book is largely based. They helped initiate a broad debate about the value of information technology in higher education, about the spread of distance education and about the commercialization of universities. Such questions badly need to be asked if we are to maintain our universities as vital democratic institutions. But while the original essays were powerful provocations and polemics, the book itself is a disappointing and limited guide to current debates over the future of the university.

One problem is that the book has a dated quality, since the essays are reproduced largely as they were first circulated online starting in October 1997 (except for some minor editorial changes and the addition of a brief chapter on Army online education efforts). In those four-plus years, we have watched the rise and fall of a whole set of digital learning ventures that go unmentioned here. Thus, Noble warns ominously early in the book that "Columbia [University] has now become party to an agreement with yet another company that intends to peddle its core arts and science courses." But only in a tacked-on paragraph in the next to last chapter do we learn the name of the company, Fathom, which was launched two years ago, and of its very limited success in "peddling" those courses, despite Columbia president George Rupp's promise that it would become "the premier knowledge portal on the Internet." We similarly learn that the Western Governors' Virtual University "enrolled only 10 people" when it opened "this fall" (which probably means 1998, when Noble wrote the original article) but not that the current enrollment, as of February 2002, is 2,500. For the most part, the evidence that Noble presents is highly selective and anecdotal, and there are annoyingly few footnotes to allow checking of sources or quotes.

The appearance of these essays with almost no revision from their initial serial publication on the Internet also helps to explain why Noble's arguments often sound contradictory. On page 36, for example, he flatly asserts that "a dismal new era of higher education has dawned"; but just twenty-four pages later, we learn that "the tide had turned" and that "the bloom is off the rose." Later, he reverses course on the same page, first warning that "one university after another is either setting up its own for-profit online subsidiary or otherwise working with Street-wise collaborators to trade on its brand name in soliciting investors," but then acknowledging (quoting a reporter) that administrators have realized "that putting programs online doesn't necessarily bring riches." When Noble writes that "far sooner than most observers might have imagined, the juggernaut of online education appeared to stall," he must have himself in mind, two chapters earlier. Often, Noble is reflecting the great hysteria about online education that swept through the academy in the late 1990s. At other times (particularly when the prose has been lightly revised), he registers the sober second thoughts that have more recently emerged, especially following the dot-com stock market crash in early 2000.

In the end, one is provided remarkably few facts in Digital Diploma Mills about the state of distance education, commercialization or the actual impact of technology in higher education. How many students are studying online? Which courses and degrees are most likely to appear online? How many commercial companies are involved in online education? To what degree have faculty employed computer technology in their teaching? What has been the impact on student learning? Which universities have changed their intellectual property policies in response to digital developments? One searches in vain in Noble's book for answers, or even for a summary of the best evidence currently available.

Moreover, Noble undercuts his own case with hyperbole and by failing to provide evidence to support his charges. For example, most readers of his book will not realize that online distance education still represents a tiny proportion of college courses taken in the United States--probably less than 5 percent. Noble sweepingly maintains, "Study after study seemed to confirm that computer-based instruction reduces performance levels." But he doesn't cite which studies. He also writes, "Recent surveys of the instructional use of information technology in higher education clearly indicate that there have been no significant gains in pedagogical enhancement." Oddly, here Noble picks up the rhetoric of distance-education advocates who argue that there is "no significant difference" in learning outcomes between distance and in-person classes.

Many commentators have pointed out Noble's own resistance to computer technology. He refuses to use e-mail and has his students hand-write their papers. Surely, there is no reason to criticize Noble for this personal choice (though one feels sorry for his students). Noble himself responds defensively to such criticisms in the book's introduction: "A critic of technological development is no more 'anti-technology' than a movie critic is 'anti-movie.'" Yes, we do not expect movie critics to love all movies, but we do expect them to go to the movies. Many intelligent and thoughtful people don't own television sets, but none of them are likely to become the next TV critic for the New York Times. Thus, Noble's refusal to use new technology, even in limited ways, makes him a less than able guide to what is actually happening in technology and education.

Certainly, Noble's book offers little evidence of engagement with recent developments in the instructional technology field. One resulting distortion is that some readers will think that online distance education is the most important educational use of computer technology. Actually, while very few faculty teach online courses, most have integrated new technology into their regular courses--more than three-fifths make use of e-mail; more than two-fifths use web resources, according to a 2000 campus computing survey. And few of these faculty members can be characterized, as Noble does in his usual broad-brush style, as "techno-zealots who simply view computers as the panacea for everything, because they like to play with them."

Indeed, contrary to Noble's suggestion, some of the most thoughtful and balanced criticisms of the uses of technology in education have come from those most involved with its application in the classroom. Take, for example, Randy Bass, a professor of English at Georgetown University, who leads the Visible Knowledge Project (http://crossroads.georgetown.edu/vkp), a five-year effort to investigate closely whether technology improves student learning. Bass has vigorously argued that technological tools must be used as "engines of inquiry," not "engines of productivity." Or Andrew Feenberg, a San Diego State University distance-education pioneer as well as a philosopher and disciple of Herbert Marcuse, who has insisted that educational technology "be shaped by educational dialogue rather than the production-oriented logic of automation," and that such "a dialogic approach to online education...could be a factor making for fundamental social change."

One would have no way of knowing from Noble's book that the conventional wisdom of even distance-education enthusiasts is now that cost savings are unlikely, or that most educational technology advocates, many of them faculty members, see their goal as enhancing student learning and teacher-student dialogue. Noble, in fact, never acknowledges that the push to use computer technology in the classroom now emanates at least as much from faculty members interested in using these tools to improve their teaching as it does from profit-seeking administrators and private investors.

Noble does worry a great deal about the impact of commercialization and commodification on our universities--a much more serious threat than that posed by instructional technology. But here, too, the book provides an incomplete picture. Much of Noble's book is devoted to savaging large public and private universities--especially UCLA, which is the subject of three chapters--for jumping on the high-technology and distance-education bandwagons. Yet at least as important a story is the emergence of freestanding, for-profit educational institutions, which see online courses as a key part of their expansion strategy. For example, while most people think of Stanley Kaplan as a test-preparation operation, it is actually a subsidiary of the billion-dollar Washington Post media conglomerate; it owns a chain of forty-one undergraduate colleges and enrolls more than 11,000 students in a variety of online programs, ranging from paralegal training to full legal degrees at its Concord Law School, which advertises itself as "the nation's only entirely online law school." This for-profit sector is growing rapidly and becoming increasingly concentrated in a smaller number of corporate hands. The fast-growing University of Phoenix is now the largest private university in the United States, with more than 100,000 students, almost one-third of them in online programs, which are growing more than twice as fast as its brick-and-mortar operation. Despite a generally declining stock market, the price of the tracking stock for the University of Phoenix's online operation has increased more than 80 percent in the past year.

As the Chronicle of Higher Education reported last year, "consolidation...is sweeping the growing for-profit sector of higher education," fueled by rising stock prices in these companies. This past winter, for example, Education Management Corporation, with 28,000 students, acquired Argosy Education Group and its 5,000 students. The threat posed by these for-profit operations is rooted in their ability to raise money for expansion through Wall Street ("Wall Street," jokes the University of Phoenix's John Sperling, "is our endowment") and by diminishing public support for second-tier state universities and community colleges (the institutions from which for-profits are most likely to draw new students). Yet, except for an offhand reference to Phoenix, Digital Diploma Mills says nothing about these publicly traded higher-education companies. And these for-profit schools are actually only a small part of the more important and much broader for-profit educational "sector," which is also largely ignored by Noble and includes hundreds of vendors of different products and services, and whose size is now in the hundreds of billions of dollars--what Morgan Stanley Dean Witter calls, without blushing, an "addressable market opportunity at the dawn of a new paradigm."

Noble does provide one strong cautionary tale--the most informative chapter in the book--in his account of the involvement of UCLA's extension division with a commercial company called Onlinelearning.net. He shows how some UCLA administrators as early as 1993 greedily embraced a vision of riches to be made in the online marketing of the college's extension courses. UCLA upper management apparently bought the fanciful projections of their commercial partners that the online venture would generate $50 million per year within five years, a figure that in reality quickly plummeted to below $1 million annually. But Noble conflates the UCLA online-extension debacle with a more benign effort by the UCLA College of Letters and Sciences, beginning in 1997, to require all instructors to post their course syllabuses on the web. He seems unwilling to draw distinctions between the venal and scandalous actions of top UCLA administrators and the sometimes ham-handed efforts of other administrators to get UCLA faculty to enhance their classes by developing course websites, a fairly common educational practice and a useful convenience for students. Three-fifths of UCLA students surveyed said that the websites had increased interactions with instructors, and social science faculty recently gave the website initiative a mostly positive evaluation.

Sounding an "early alarm" so that faculty members can undertake "defensive preparation and the envisioning of alternatives" is how Noble explains his purpose in writing Digital Diploma Mills. But will faculty be well armed if they are unaware of the actual landscape they are traversing? In the end, Noble leaves us only with a deep and abiding suspicion of both technology and capitalism. His analysis of technology and education does echo Marx's critique of capitalism, with its evocation of concepts like commodification, alienation, exchange and labor theories of value. But unlike Marx, who produced a critical analysis of the exploitative nature of early capitalist production without outright rejection of the technology that made industrialization possible, Noble cannot manage the same feat.

In the current political climate, Noble's undifferentiated suspicion of technology hinders us more than it helps us. Are we prepared to follow him in his suspicion of any use of technology in higher education? Are faculty members willing to abjure e-mail in communicating with their students and colleagues? Are instructors at small colleges with limited library collections prepared to tell their students not to use the 7 million online items in the Library of Congress's American Memory collection? Are they ready to tell students whose physical disabilities limit their ability to attend on-campus classes or conduct library research that they cannot participate in higher education? Are faculty at schools serving working adults who struggle to commute to campus prepared to insist that all course materials be handed directly to students rather than making some of them available online?

Similarly, what lines are we prepared to draw with respect to commercialization of higher education within the capitalist society in which we live? Are faculty willing to abandon publishing their textbooks with large media conglomerates and forgo having their books sold through nationwide bookstore chains? Are they prepared to say to working-class students who view higher education as the route to upward mobility that they cannot take courses that help them in the job market?

Noble's answer to most of these questions would undoubtedly be yes, insisting, as he does, that anything less than the "genuine interpersonal interaction," face to face, undermines the sanctity of the essential teacher-student relationship. In a March 2000 Chronicle of Higher Education online dialogue about his critique of technology in education, Noble complained that no one had offered "compelling evidence of a pedagogical advantage" in online instruction. (He pristinely refused to join online, and had a Chronicle reporter type in his answers relayed over the phone.) A student at UCLA, who had unexpectedly taken an online course, noted in her contribution to the Q&A that because she tended to be "shy and reserved," e-mail and online discussion groups allowed her to speak more freely to her instructor, and that she thought she retained more information in the online course than in her traditional face-to-face classes at UCLA. Noble rejected the student's conclusion that the online course had helped her find her voice, arguing that writing was "in reality not a solution, but an avoidance of the difficulty." "Speaking eloquently, persuasively, passionately," he concluded, "is essential to citizenship in a democracy." Putting aside the insensitivity of Noble's reply, his position, as Andrew Feenberg points out in Transforming Technology: A Critical Theory Revisited, is reminiscent of Plato's fear that writing (the cutting-edge instructional technology in the ancient world) would replace spoken discourse in classical Greece, thus destroying the student-teacher relationship. (Ironically, as Feenberg also notes, "Plato used a written text as the vehicle for his critique of writing, setting a precedent" for current-day critics of educational technology like Noble who have circulated their works on the Internet.)

The conservative stance of opposing all change--no technology, no new modes of instruction--is appealing because it keeps us from any possible complicity with changes that undercut existing faculty rights and privileges. But opposition to all technology means that we are unable to support "open source" technological innovations (including putting course materials online free) that constitute a promising area of resistance to global marketization. And it makes it impossible to work for protections that might be needed in a new environment. Finally, it leaves unchanged the growing inequality between full-time and part-time faculty that has redefined labor relations in the contemporary university--the real scandal of the higher-education workplace. Without challenging the dramatic differences in wages and workloads of full professors and adjunct instructors, faculty rejection of educational technology begins to remind us of the narrow privileges that craft workers fought to maintain in the early decades of industrial capitalism at the expense of the unskilled workers flooding into their workplaces.

We prefer to work from a more pragmatic and realistic stance that asks concretely about the benefits and costs of both new technology and new educational arrangements to students, faculty (full- and part-time) and the larger society. Among other things, that means that academic freedom and intellectual property must be protected in the online environment. And the faculty being asked to experiment with new technology need to be provided with adequate support and rewards for their (ad)ventures. As the astute technology commentator Phil Agre wrote when he first circulated Noble's work on the Internet, "the point is neither to embrace or reject technology but to really think through and analyze...the opportunities that technology presents for more fully embodying the values of a democratic society in the institutions of higher education."
