That success is in the eye of the unsuccessful would seem to be the great unspoken dilemma dogging critics asked to consider the work of the rich and famous author and inspirational speaker Malcolm Gladwell. No matter how well intentioned or intellectually honest their attempts to assess his ideas, the subtext of Gladwell’s perceived success, and its implications for their own aspirations in the competitive thought-generation business, clouds their judgment and sinks their morale. Nearly a decade has passed since the New York Times dryly summarized Gladwell’s first book, The Tipping Point: How Little Things Can Make a Big Difference (2000), as “a study of social epidemics, otherwise known as fads,” and yet, each Sunday, it still taunts perusers of the paperback nonfiction rankings, where it currently sits in sixth place. Gladwell may be merely “a slickster trickster” who “markets marketing” (as James Wolcott put it), or a “clever idea packager” who “cannot conceal the fatuousness of his core conclusions” (science writer John Horgan); he might even be an “idiot” (Leon Wieseltier). But one thing is clear: Gladwell is no fad. He is a brand, a guru, a fixture at New York publishing parties and in the spiels of literary agents hoping to steer writers toward concepts that will strike publishers as “Gladwellian.”
By 2005, when Gladwell’s second book, Blink: The Power of Thinking Without Thinking, made its debut on the Times bestseller list in the No. 2 spot, the assumption had gradually taken hold that despite Gladwell’s bona fides as a New Yorker staff writer, his success was on some level a triumph of style over substance, verisimilitude over reality, ease over rigor. It did not hurt that the closest Blink came to a governing thesis was the foggy notion that too many ideas can spoil an operation.
A gaggle of irate critics, seeking to right this injustice, came charging, pens bravely brandished, only to watch themselves sink into the quicksand of Gladwell’s infuriatingly memorable–“sticky,” in Gladwellese–concepts and jargon. “Why does spending a weekend with Mr. Gladwell’s best-selling books…lead to unhappiness and a pathological fixation on writing in rhetorical questions?” wailed Tom Scocca in the New York Observer. So discombobulated was Scocca by this critical game of pin-the-tail that when he finally stopped spinning, he stuck his sticker straight into a tautology. His solution to the Gladwell question was to posit that Gladwell’s style had simply gotten too Gladwellian. “The problem with the Malcolm Gladwell Piece,” as he put it, “is that it always seems to contain phrases like ‘the problem with the Malcolm Gladwell Piece.'”
That Gladwell’s most recent blockbuster monograph, Outliers: The Story of Success, actually purported to be about success only accelerated the vicious cycle of maddening self-reference, inviting as it did a critical Gladwellian case study of Gladwell’s own “success.” The book’s premise can be distilled to a single sentence: success is the result of many variables, most of which lie outside the control of a particular individual. Gladwell illustrates this point through various anecdotes and case studies that teach us a great many things we already know. For instance, hard work and education are important. Also: Culture Matters. Know Thyself. Practice Makes Perfect. Or, in the words of the Observer‘s Alexandra Jacobs, Outliers is about “how super-achievers like–well, like Malcolm Gladwell!–get where they are.” And sure enough, in The New York Times Book Review, David Leonhardt took the bait, writing a brief alternative history of Gladwell’s life crediting said “success” to his parents’ professions. His mother, a psychotherapist, and his father, a mathematician, “pointed young Malcolm toward the behavioral sciences, whose popularity would explode in the 1990s,” wrote Leonhardt; in addition, his mother, who “just happened to be a writer on the side,” taught him “‘that there is beauty in saying something clearly and simply.'”
But in examining Gladwell’s success concurrently with his prescriptions for achievement, even his harshest reviewers damned themselves with faint criticism. When Michiko Kakutani dismissed Outliers for employing the patented Gladwell “shake-and-bake” recipe “in such a clumsy manner that it italicizes the weaknesses of his methodology,” she still granted him a coherent method; when The Economist embraced the book’s “engaging” and “intriguing” case studies while wryly enclosing the overarching “big idea” in quotation marks, it overlooked Gladwell’s refusal to engage meaningfully with the world of ideas at all.
The Economist was astute to observe that the sheer obviousness of Outliers‘ core ideas, which were “unlikely to take even the least reflective reader by surprise,” marked a departure from The Tipping Point. But when the magazine described The Tipping Point‘s chief attraction as its title concept’s capacity to lend “the power of apparent inevitability to almost any argument,” it failed to mention that the concept was central to Outliers as well–this despite the fact that the purported aim of Outliers was to remind readers that “success,” for most of us, is anything but inevitable. Such are the contradictions that seem to riddle not just Gladwell’s thinking but the thinking on Gladwell’s thinking, and perhaps even the thinking on thinking on that, and it is precisely these slippery but substantive contradictions that have allowed Gladwell to tout his revolutionary “big ideas” without couching them in anything so mundane as a logical, well-supported or otherwise sound argument. In this failing, he is not unique among either media mavens or the intelligentsia, but he is, perhaps, outstanding. “I don’t really think of myself as an outlier,” Gladwell told New York magazine late last year. “At the end of the day, I’m just a journalist.”
In 1996 The New Yorker hired Gladwell as a staff writer after first publishing an essay he wrote for the magazine’s “Black in America” special issue. He had spent the previous nine years at the Washington Post, where he covered health policy and science. In his rumination on the nuances of prejudice (which is the basis of the last chapter of Outliers), Gladwell defenestrated the fallacy that Canadians are less racist than Americans with an anecdote describing a coffee date with an old college acquaintance. The man launched into a tirade about the threat Toronto was facing from Jamaican immigrants, Jamaica being the outpost where all the most “troublesome and obstreperous” slaves had been sent. “I have told that story many times since, usually as a joke, because it was funny in an appalling way,” wrote the half-Jamaican Gladwell. “I tell the story that way because otherwise it is too painful.”
“Somebody,” he concludes, “always has to be the nigger.”
That Gladwell would rarely again end a story with such a downer of a line is in evidence in his new book, What the Dog Saw, a collection of his articles from his tenure at The New Yorker. The collection provides an archive of just a fraction of the stories he’s written since the mid-’90s, when, in the employ of the magazine’s famously buzz-obsessed former editor Tina Brown, Gladwell began studiously scrubbing his sentences of the mildew of the old, liberating his readers from references to anything that might dirty undiluted all-newness with the dourness of precedent. Gladwell trained his sights on the more vacuous anxieties of the heirs and heiresses of American affluence. In 1999 he wrote a story called “Running From Ritalin,” about the wildly overprescribed drug for attention-deficit disorder, which he claimed was merely the modern answer to a widespread dopamine deficiency that previous generations had treated with cigarettes and cocaine, “a drug,” he explains helpfully, “that people thought would help them master the complexity and the competitive pressures of the world around them.” Soon after, Gladwell would tackle college admissions, shopping, parenting, standardized testing, corporate culture and transformative household inventions of the twentieth century, often all in the space of a few dozen column inches, and in the template that he had fashioned in the Ritalin story: a cheerful, conversational voice deployed in a perfectly paced dopamine prose that had the palliative effect of nullifying whatever concerns readers might have about this product or that problem.
Gladwell promised readers mastery of the complex and competitive world around them, if only they would accept the facile conclusions he extrapolated from the findings of the many endearingly eccentric, iconoclastic scholars and researchers who were busy applying the scientific method to the investigation of everyday living. These scientists tended to share a universal message: contrary to our latent anxieties about modern life, everything is all right–or can be, with a few minor psychopharmacological tweaks–so come on, get happy! From a stammering “retail anthropologist” we learn that shoppers are not nearly so slavish and easily manipulated as the chain stores believe them to be. From a “heroically counterintuitive” historian of loopholes we learn to appreciate, rather than resent, tax cheats, smut peddlers and sources close to the investigation who exploit the letter of the law to undermine its spirit. From a series of surprisingly sincere marketing executives we gain a nuanced appreciation for both the fullness (“amplitude,” in industry parlance) of the taste of ketchup and the subtle subversiveness of early Clairol commercials. “In writing the history of women in the postwar era,” Gladwell wonders in this last piece, “did we forget something important? Did we leave out the hair?”
Gladwell’s protagonists are generally intelligent but ordinary folks who have imbued their work with a passionate practicality. Their laboratories are courtrooms and high-concept shopping malls, office parks and African villages, but whatever their locale, they are always buried in data, endless stacks and reams and massive videotape libraries full of tens of thousands of hours of footage documenting their findings, their desks buckling under thick piles of “carefully annotated tracking sheets.” With this abundance of evidence they espouse theories that Gladwell depicts either as regrettably naïve or courageously counterintuitive, depending on whether he is debunking conventional wisdom or advancing a hitherto unknown experimental truth. He takes pains to skewer, for instance, the delusion that the Central Park jogger was saved by a “miracle” and the misconception that the Challenger explosion revealed a hideously corrupt species of neglect at NASA. Particularly vexing to Gladwell and his data marshals are overblown health hazards scaring the consumptive populace off such marvels as breast implants, estrogen therapy, newfangled birth control pills and products containing the fat substitute Olestra, the famed and feared “stool loosening” side effect of which Gladwell expends many sentences likening to that of bran cereal.
A recurring straw man for Gladwell is misguided evangelism, generally the kind that rallies around fringe causes, though his aversion to strident moralism usually keeps him from fixing on a villain. A notable exception is the late diet guru Dr. Atkins, upon whom he loosed his most withering scorn in a comprehensive takedown of the diet industry published in 1998. Otherwise Gladwell seems to regard his intellectual foes as somewhat pathetic figures. He wants to love, for instance, Dr. Susan Love–the charismatic but suspiciously shrill critic of estrogen therapy he profiled in 1997–but the data just don’t support her claim that the treatment dramatically increases women’s risk of developing breast cancer. Estrogen, however, does cause breast cancer, we learn three years later in “John Rock’s Error,” the cautionary tale of another wayward evangelist, the Roman Catholic doctor who helped develop the birth control pill. Rock lobbied the Catholic Church to lift its ban on the pill and, having failed, eventually lost his faith in God and drank himself to death.
In a 1998 article called “Do Parents Matter?” Gladwell championed the findings of one Judith Rich Harris, a “fragile, elfin” and grandmotherly editor of child-psychology textbooks who had published a groundbreaking study purporting to show that parents are rarely to blame for screwing up their kids. Harris had formulated her hypothesis while editing a book on juvenile delinquency that offhandedly credited the motivation for such behavior to the desire to be more like adults. “Adolescents aren’t trying to be like adults–they are trying to contrast themselves with adults,” she explained to Gladwell. But like a fickle teenager, Gladwell would casually shrug off the wisdom he had gleaned from Harris in a piece that appeared several months later. Here he contrasts the television show Beverly Hills, 90210, which “played to the universal desire of adolescents to be grownups,” with its spinoff, Melrose Place:
“Melrose” was the opposite. It started with a group of adults–doctors, advertising executives, fashion designers–and dared to have them behave as foolishly and as naively as adolescents. Most of them lived in the same apartment building, where they fought and drank and wore really tight outfits and slept together in every conceivable permutation. They were all dumb, and the higher they rose in the outside world the dumber they got when they came home to Melrose Place. In the mid-nineteen-nineties, when a generation of Americans reached adulthood and suddenly realized that they didn’t want to be there, the inverted world of Melrose was a wonderfully soothing place. Here, after all, was a show that ostensibly depicted sophisticated grownup society, and every viewer was smarter than the people on the screen.
But for all this vapidity, Gladwell finds something to admire in the melodrama: restraint.
The wonderful thing about “Melrose Place” was that just when you thought that the show was about to make some self-consciously postmodern commentary on, say, the relationship between art and life, it had the courage to take the easy way out and go for the laugh.
The publication of Gladwell’s first book, The Tipping Point, proved his courage to be of a similar character. The Tipping Point was named for an epidemiological phenomenon that he had introduced to the public when he covered healthcare policy for the Post. Healthcare reporting is a beat that notoriously leaves journalists disillusioned by the destructive influence of money and markets on the public welfare; in Gladwell’s case, it provided the central metaphor for a book that applied the observations of health officials to the business of “want creation,” otherwise known as branding. Gladwell describes the genesis of the book in detail in a “Q&A With Malcolm” on Gladwell.com:
The word “Tipping Point”…comes from the world of epidemiology. It’s the name given to that moment in an epidemic when a virus reaches critical mass. It’s the boiling point. It’s the moment on the graph when the line starts to shoot straight upwards. AIDS tipped in 1982, when it went from a rare disease affecting a few gay men to a worldwide epidemic. Crime in New York City tipped in the mid 1990’s, when the murder rate suddenly plummeted. When I heard that phrase for the first time I remember thinking–wow. What if everything has a Tipping Point? Wouldn’t it be cool to try and look for Tipping Points in business, or in social policy, or in advertising or in any number of other nonmedical areas?
The product of this endeavor was what Gladwell calls “an intellectual adventure story,” a genre-crossing book that whipped up a “little bit of sociology, a little of psychology and a little bit of history,” tossed in some epidemiology and “examples from the worlds of business and education and fashion and media,” and hawked the resulting mishmash of soft social science and hard cases to help readers make “sense of the world, because I’m not sure that the world always makes as much sense to us as we would hope.”
The temptation to try to calculate The Tipping Point‘s own tipping point seems thus far to have been resisted, but there is no doubt that the book reached a great many Mavens, Salesmen and Connectors and eventually became a phenomenal success. To what was that success attributable? Surely encoding the principles of bestsellingness and infection by word of mouth in the book’s DNA did some of the work, but there were other contributing factors, not the least of which was what has become Gladwell’s signature style, which projects the expertise of a scientist and the easy helpfulness of the guy who delivers the local television station’s “news you can use” segment at 6:25.
In searching for an anecdote or image with which to convey the ultra-absorbency of Gladwell’s book as compared with that of his soggier-sentenced peers, I found myself remembering a story Gladwell wrote in 2001 about the technology of diapers. In this story, Gladwell reported that “those in the trade” refer to the waste that diapers are engineered to retain as “the insult,” and this image seems to me as useful as any for thinking about Gladwell’s success. His masterful maneuver was to engineer a style that artfully conceals “the insult,” honing it in his articles before finally unleashing it in book form with The Tipping Point.
What made The Tipping Point remarkable was not the diagrams or axioms or anything it included but rather what it left out: that is, any discussion of the real risks of business at a moment when its sexiest sector, technology, was increasingly uncertain about how it was going to survive once it had burned through its remaining seed money. Instead, Gladwell celebrated the way certain personality types can, given a hospitable set of circumstances, or “context,” conspire with extraneous forces to profoundly alter human behavior–without ever dwelling on how this might be a bad thing or bothering to provide a clear definition of the word “context.” In the “Q&A,” Gladwell says he hopes readers will use the “new set of tools”–“brain software”–he provides them to create “‘positive’ epidemics” of their own. Dr. Atkins is nowhere to be found, nor is Susan Love; instead, he populates his book with a nonthreatening cast of folksy, relatable characters–behavioral psychologists, petty criminals and criminologists, effusive socialites and seminarians, Big Bird and Peter Jennings–and tells their stories in a manner so adamantly engaging that it reads suspiciously as if it had been focus-grouped. In conversation with Tom Scocca, Gladwell characterized his style as one that screams “Please, please, don’t leave me,” and indeed his stories often seem designed to do nothing more than to keep people reading.
This is nowhere more apparent than in What the Dog Saw. The book is mostly old news, with the exception of a preface in which Gladwell attempts to justify the methodology behind his pieces, an explanation, it appears, that is also meant as a rejoinder to all those aforementioned heckling critics who have failed, fundamentally, to comprehend Gladwell’s project. Complaining that his greatest frustration as a writer has been the angry reader who believes that Gladwell wants him to “buy” his argument, Gladwell asserts,
Good writing does not succeed or fail on the strength of its ability to persuade. Not the kind of writing that you’ll find in this book, anyway. It succeeds or fails on the strength of its ability to engage you, to make you think, to give you a glimpse into someone else’s head–even if in the end you conclude that someone else’s head is not a place you’d really like to be.
One could quibble with the assertion that a writer’s obligation to persuade begins and ends with keeping the reader reading, not in order to convince him of certain conclusions but merely to enable him to satisfy that basic human impulse to explore and temporarily inhabit other minds. But the critic’s natural persuasion is to attempt to inhabit the mind of a writer, to evaluate how satisfying the stay was, even in the probable case that one wouldn’t want to be lodged there permanently. And one thing that frustrates this reader of Gladwell is his obvious aversion to giving us any privileged access to his mind, encouraging us instead to inhabit more fully the consciousnesses of dogs and their whisperers, when one would hope that his mind is an infinitely more interesting place to be. But as Gladwell tells us, “self-consciousness is the enemy of ‘interestingness,'” and so perhaps it is that impulse to protect the self from criticism that has so hampered his work, which he chronically undersells even as his books outsell his every rival.
“A book, I was taught long ago in English class, is a living and breathing document that grows richer with each new reading. But I never quite believed that until I wrote The Tipping Point,” he gushed in the afterword to the 2002 edition, citing the “conferences and retreats and sales meetings” where he had mingled with his readers. “In a world dominated by isolation and immunity, understanding these principles of word of mouth is more important than ever.”
Gladwell has said that of all the people he has interviewed, he most identifies with Nassim Nicholas Taleb, the polymath former derivatives trader turned “risk management” guru whom he profiled in April 2002, after Taleb published his breakthrough bestseller Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Taleb, now “distinguished professor of risk engineering” at NYU, writes chatty, nonlinear nonfiction books that are invariably described as intellectually “provocative…in the tradition of” Malcolm Gladwell. Taleb’s follow-up, The Black Swan: The Impact of the Highly Improbable (2007), examined a phenomenon of the same genus as Gladwell’s outlier: a “black swan,” according to Taleb, is an unlikely but “consequential event” with profound transformative implications.
But if Taleb shares something in content and style with Gladwell, his books have a markedly different tone. Taleb considers himself a connoisseur of the “epistemic arrogance of the human race,” and, unlike Gladwell, he rather conspicuously relishes the chance to hurl “the insult”–in all its freshness–at those aspiring bigwigs misguidedly combing his books for investment strategies. In Fooled by Randomness, he mocks his own youthful distrust of philosophy, which he considered “an activity reserved for those who were not well versed in quantitative methods and other productive things,” and then describes turning to it later in life, after realizing he was “generally repelled by the wealthy, generally because of the attitude of epic heroism that usually accompanies rapid enrichment.” He recounts his gradual recognition that
$10 million earned through Russian roulette does not have the same value as $10 million earned through the diligent and artful practice of dentistry. They are the same, can buy the same goods, except that one’s dependence on randomness is greater than the other’s…. Deep down, I cannot help but consider them as qualitatively different.
In The Black Swan, Taleb elaborated on what Gladwell has called Taleb’s “heretical” idea. Cataloging industries on a spectrum between two poles, Extremistan and Mediocristan, where finance in the era of securitization and the art world in the age of digital reproduction hovered near Extremistan and dentists, mechanics and community organizers were still largely anchored in Mediocristan, Taleb closed in on a flaw in the logic of modern capitalism that he felt to be gravely dangerous. The world’s financial and consumer superpower had shifted its economy radically toward the “scalable” activities of Extremistan–where the same number of labor hours could result in one sale or a million–while maintaining Mediocristan’s solidly “average” talent base as well as the base’s conventional sense that “success” is largely a function of craftsmanship, experience and innate talent. He did not blame books like The Tipping Point for encouraging readers to believe they could game their fates in the face of Extremistan’s governing randomness, but he could have.
Gladwell’s profile explained none of Taleb’s ideas in detail, but its timing was impeccable: in 2002 Wall Street was reeling from the “blow-up” of Enron, the Twin Towers and the accounting industry, and Taleb, who claimed his methods had insulated his fund from such a fate, cut a compelling figure. “We cannot blow up, we can only bleed to death,” Taleb told Gladwell. Gladwell likened Fooled by Randomness to Martin Luther’s ninety-five theses, solemnly concluding:
This kind of caution does not seem heroic, of course. It seems like the joyless prudence of the accountant and the Sunday-school teacher…. We associate the willingness to risk great failure–and the ability to climb back from catastrophe–with courage. But in this we are wrong. That is the lesson of Nassim Taleb…and also the lesson of our volatile times.
Mirthless as he may appear to Gladwell, Taleb is a millionaire who gets to say he told you so. Why has no one said the same for Gladwell? After all, it’s true. Perhaps the reason is partly the unyielding chicken-egg binariness of his patented “To intuit, or to counter-intuit?” method, which proved insufficient to explain the increasingly complex topics he approached after Taleb, most notably the implosion of Enron. In “The Talent Myth,” published in July 2002, he declared the firm a casualty of the article’s titular fallacy, which was then in vogue among many elite companies. He used as evidence the career of Lou Pai, who found himself kicked upstairs so many times that when he left Enron in 2001, he was CEO of one of its largest subsidiaries: “Because Pai had ‘talent,’ he was given new opportunities, and when he failed at those new opportunities he was given still more opportunities…because he had ‘talent.'”
This certainly was likely to interest anyone who bought Taleb’s contention, in Fooled by Randomness, that the most vexing problem of modern finance was its practitioners’ tendency to conflate success and talent. According to their logic, failure equals talent, too! But if both were true, surely an industry rife with Lou Pais was not a little corrupt?
Alas, when presented with the chance to implicate any self-aggrandizing Extremistanis in the crime of succumbing to their context, Gladwell never missed an opportunity to miss an opportunity. The lesson of Enron, he wrote in “The Talent Myth,” was that smart people might be overrated. Five years later, in “Open Secrets,” he offered another view, challenging readers to compare Enron with Watergate. Whereas Watergate had been a puzzle, a scandal with an obvious narrator and cast of perpetrators, Enron was a mystery–a scandal too complex to comprehend. “Enron’s downfall has been documented so extensively that it is easy to overlook how peculiar it was,” he wrote. This assertion might ring less false were it followed by an elaboration of how it differed from other “mysteries”: the Enron crooks were caught and prosecuted. One noteworthy event Gladwell fails to mention is the panicked plea of Enron CEO Ken Lay for a federal bailout on grounds of the “systemic risk” the firm’s collapse would pose, a plea rejected by Treasury Secretary Paul O’Neill. All of this appears to muddy Gladwell’s earlier assertion that Enron and its management consultants at McKinsey
believe in stars, because they don’t believe in systems. In a way, that’s understandable, because our lives are so obviously enriched by individual brilliance. Groups don’t write great novels, and a committee didn’t come up with the theory of relativity.
But stars do believe in systems–at least when the system is down several trillion dollars.
Now let’s skip ahead to “Group Think,” an article Gladwell published in December 2002, just a few months after “The Talent Myth,” by which time Gladwell had fixed his lens on some new constellations, the stars of the television show Saturday Night Live:
We are inclined to think that genuine innovators are loners, that they do not need the social reinforcement the rest of us crave. But that’s not how it works, whether it’s television comedy or, for that matter, the more exalted realms of art and politics and ideas. In his book “The Sociology of Philosophies,” Randall Collins finds in all of known history only three major thinkers who appeared on the scene by themselves: the first-century Taoist metaphysician Wang Ch’ung, the fourteenth-century Zen mystic Bassui Tokusho, and the fourteenth-century Arabic philosopher Ibn Khaldun. Everyone else who mattered was part of a movement, a school, a band of followers and disciples and mentors and rivals and friends who saw each other all the time and had long arguments over coffee and slept with one another’s spouses.
Stars! They’re just like us. Which is to say, every time Gladwell begins to close in on a conclusion of real meaning or intellectual impact, he clicks his heels and returns to the mental Melrose Place of quippy clichés. What’s more, he apparently has no problem espousing the whole-truthness of two antithetical clichés–the innateness of genius and “The Power of Context” (as Gladwell had christened this truism in The Tipping Point) at almost simultaneous moments in time. Reduced further, depending on Gladwell’s narrative needs, genius is either nature or nurture, and he has cheerily eaten his cake, wrapped it up neatly in a take-away box and left us wondering where the crumbs disappeared to.
It may seem obvious to some that these are false dichotomies; neither half is ever true to the exclusion of the other. But that is the rub: there are a great many book buyers determined to hedge their bets in precisely this Gladwellian mode. Depending on the situation, they want to believe in the sovereign power of either nature or nurture–to convince themselves that anyone can be a success but also that should one be so unfortunate as to fail, that failure was predestined by an accident of fate. This is the contradictory “story of success” that runs through Gladwell’s articles, The Tipping Point and Outliers. The “power of apparent inevitability,” as The Economist termed it, is a narrative that his hungriest readers can use to explain any turn their lives might take, and it was precisely these readers who flooded Gladwell’s e-mail inbox with raves about how The Tipping Point had empowered them to take control of their lives and “contexts.”
By the time Gladwell produced a sequel to The Tipping Point, Blink, his preference for light vignettes featuring plucky heroes over grimmer fare was proving its own insult. In Blink‘s afterword, he describes the book as “a journey into the wonders of our unconscious” but one that should not “be confused with the unconscious described by Sigmund Freud, which was a dark and murky place filled with desires and memories and fantasies that were too disturbing for us to think about consciously.” Instead, Blink plumbs an unconscious realm that is surprisingly hospitable. Gladwell makes the case that because human existence is entirely too rich and nuanced to be reducible to data or logic (and by extension, to arguments or allegations), reason and reflex blend over time to yield snap decisions that are often better than the best-laid plans.
If nothing else, it was a counterintuitive moment for Gladwell to come out in favor of intuition: by 2005 the citizenry was turning against the warmongering gut instincts of the commander in chief. In Iraq, casualties continued to mount daily despite Defense Secretary Donald Rumsfeld’s entreaty for the public to sit back and wait for the war to reach its tipping point. Blink does include a chapter on the war, in which Gladwell reveals that Rumsfeld’s disastrous battle plan had been roundly defeated by a retired Marine general named Paul Van Riper in an elaborate simulation game in 2002 (the Pentagon then ran another test, in which it sabotaged Van Riper by installing a disloyal deputy and disarming the bulk of his equipment). But despite his proximity to these proceedings, Rumsfeld is never mentioned by name in Blink. Nor, for that matter, is Bush.
It seems odd that Gladwell would write an entire chapter about the war without ever mentioning two of its main protagonists, almost as if he might believe his readers were paying so little attention that they could forget whose hunches about a failed battle plan had gotten them into this mess. And perhaps he did. In The New Republic, Richard Posner jeered that Blink “is written like a book intended for people who do not read books.” But that’s not quite right: Blink appears to have been written not for people who don’t read books but for people who read only books that spend years on the bestseller lists, books you can talk about with your boss or buy in bulk for the marketing department.
Gladwell has documented his love-hate relationship with such books: he has gone on the record about his disdain for the diet-book industry, but he has also described his admiration for Rick Warren’s The Purpose-Driven Life. And if Taleb is Gladwell’s hero, his villain is, as Taleb’s was, the mendacious and self-aggrandizing CEO. In 2001 Gladwell ridiculed Michael Eisner, Sumner Redstone and Jack Welch for ripping off Lee Iacocca’s formula for the corporate memoir wherein modest, homespun beginnings and “gruff, no-nonsense” mentors lay the foundations for a self-made man to make his way to the corner office. Outliers is Gladwell’s corrective to this genre. In it, we learn that Bill Gates, Steve Jobs and sundry other titans of Silicon Valley were all born into affluent households in the mid-1950s and otherwise benefited from a variety of cultural and circumstantial factors that yielded some of the world’s most successful people.
And so once again we find Gladwell muckraking in the trenches of banal cliché and thereby reinforcing said cliché–and, more insidiously, banality itself. In Outliers, as in Blink, he appears to assume that the unexamined life is the only sort his readers could be living, though lessons with titles like “Demographic Luck” and “The Importance of Being Jewish” suggest that he may have downgraded his expectation of who his readers are from the less savvy to the truly oblivious. Outliers contains a few new terms and morsels of trivia: the 10,000-Hour Rule describes the number of practice hours one must put in to attain true genius; we also learn that fourteen of the seventy-five individuals on Gladwell’s list of the “richest people in human history” were Americans born between 1831 and 1840. (Cleopatra is No. 21.) But for the most part, the book’s first section, “Opportunity,” contains nothing that will enlighten anyone who has given even a small fraction of 10,000 hours of thought to the word’s meaning.
But it is when Gladwell ventures from the home of the brave to foreign cultures–primarily the Asian ones we’ve voted most likely to succeed–that Outliers begins to rely on clichés that are not only inane but, in some cases, comically offensive. In a section on the crash of a Korean Air passenger jet, Gladwell blames cultural deference for enabling numerous preventable in-flight disasters on the carrier–and credits the airline’s ability to overcome its rigid “cultural legacy” for steering Korean Air back toward safety. We also travel to China’s rice paddies, where the Chinese long ago learned–at least in the south, the region where rice is farmed–teamwork, self-discipline and the appreciation of complex but “meaningful” work that has enabled them to dominate global manufacturing. And in the most convoluted section of Outliers, Gladwell repurposes an argument from a book called The Number Sense that posits that Asians are good at math because in Chinese, the numerals one through nine are single-syllable, so brief to think or speak that Chinese children can fit a great many of them in their heads in any given time span, which gives them a self-perpetuating cognitive edge from the age they learn to count. From here, Gladwell explains that these tiny numerals are ordered in a system simpler than ours (the number eleven, for example, is expressed as ten-one) and that this ease and logic, combined with the discipline they’ve learned diligently tending their rice paddies, is what makes not just Chinese students of mathematics but also Japanese and Korean students superior to their Western counterparts.
For now let’s ignore Gladwell’s agronomical observations as well as the fact that the Number Sense argument can’t apply to Japanese or Korean, in which several of the numbers from one through nine are polysyllabic. Let’s instead turn a Gladwellian eye to a sixteenth-century Italian missionary named Matteo Ricci. One of the first Westerners to travel to China, in 1583, Ricci found a nation that was not, it might surprise you to learn, very good at math. His twenty-seven-year stay in China is described in detail in The Memory Palace of Matteo Ricci (1984), a book by historian Jonathan Spence. Under previous dynasties the Chinese had made significant advances in mathematics, but during the European Renaissance China’s Ming emperors–whose primary goal was to reassert Chinese cultural supremacy after a hundred years of humiliating Mongol rule–prioritized literature and art over scientific discovery, a bias they reinforced through a rigorous examination system that governed advancement in civil service. The turning or tipping point or whatever came when Ricci learned Chinese. While all the conventional, data-supported wisdom tells us that languages are best learned young, when the dramatic burdens of meaning and experience are less likely to clog up the process, Ricci had a counteradvantage: learning Chinese as what Gladwell would term a “late bloomer,” at an age (31) by which he had absorbed enough meaning and experience to approach the task with some deliberation. It was a formidable undertaking that required the mastery of thousands of complex characters, some of which contained more than twenty strokes. But the Jesuits of the era were keen on mnemonic devices as a method of keeping their minds active during prayer sessions; while praying they might, for instance, try to focus by visualizing Christ suffering on the cross.
Ricci saw in Chinese characters a built-in framework for remembering meaning, and within a few years he had not only learned the language but had fine-tuned his sensibilities to reflect the cultural norms it articulated, replacing the humble, Buddhist monastic garb he’d donned to fit in with the locals with the extravagant silk robes that marked a learned man. Ricci soon began work on a book in Chinese, a study guide for merchants’ sons cramming for the government-service exam. His innovation was to introduce these aspiring bureaucrats to memory tricks that would enhance that very susceptibility to memorization that is inherent in the language. In other words, learning to write Chinese requires so much sustained, focused concentration and memorization that any skill that relies heavily on these qualities is by definition apt to come easier to those who write Chinese, which, since Ricci’s time, many more Chinese have learned to do. Put in The Tipping Point’s terms, Ricci’s success as both a Maven and a Connector was due to his keen appreciation of the “Stickiness Factor” of the Chinese language. Math was a footnote: along with drafting one of the first maps of the world seen in China, Ricci translated Euclid’s Elements of Geometry into Chinese, as well as various other texts in ethics and cosmography, all with the goal of introducing the Chinese to the Christian God who would save their souls.
Ricci hid his true motives for two decades, the Jesuits having agreed to keep quiet about Christianity until they had the emperor’s explicit permission to stay in China. Once that was achieved, Ricci began proselytizing a brand of faith from which most mentions of the Holy Trinity, the Virgin Mary and the saints were stricken, and the crucifixion had been scrapped altogether. What Hugh Trevor-Roper called his “gentlemanly deism” generally emphasized the study of math and science over ritual devotion, and he convinced many Confucian scholars that Christians and Confucians believed in the same God. At one point Ricci had a dream in which he met a man who asked him, “Is this the way you wander about this vast kingdom, imagining that you can uproot an age-old religion and replace it with a new one?” Ricci concluded that the man was actually God, who was encouraging him to persevere.
It is still a challenge to pinpoint the nature of Ricci’s core beliefs (he spent his dying days on an increasingly zealous mission to stamp out the influence of Buddhist “idol-worshippers” on the virtuous Confucians). It has been suggested that the stress and secrecy of his work left him in poor mental health, but his efforts, along with those of many other Jesuits keenly sensitive to context, eventually converted hundreds of thousands of Chinese to Christianity.
Can revisiting Matteo Ricci’s memory palace help illuminate the modern-day mind of Malcolm Gladwell? I wonder if Gladwell sees himself as an office-park missionary dispatched by the church of academe to tour the lecture circuit and convert the leaders of corporate America with “good news” from the ivory tower, its gospel made easy and ecumenical by all those helpful exercises and sticky new terms.
In that case, perhaps Gladwell’s intellectual compromises are neither commercial nor unintentional but rather a necessary outgrowth of his higher calling: to explore the secret workings of the world and impart the resulting data to its self-appointed stewards, the titans of industry. This conclusion, if true, may resolve many of the most puzzling incongruities riddling Gladwell’s articles: his continued defense of the pharmaceutical industry even as he advocates for single-payer healthcare; his refusal to indict the financial sector’s rigged “star system” as the engine of corruption that it is; the meticulous bleaching of his own prose, which whitewashes out any real context, any framework in which wars and economic collapses can actually be understood as wars and economic collapses rather than simulations or malfunctions; his near total avoidance of academic thought that does not base its findings on things observed in labs (with the exception of Carl Jung, whose legacy he reduces to the popularization of personality tests); his coyness about politics; and most memorably, his irritating, unrelenting readability.
A kindred spirit of Gladwell’s hero, Professor Nassim Nicholas Taleb, makes an appearance in Outliers in the form of Chris Langan, another misanthropic genius who seems more amused by the foibles of humanity than eager to assist his inferiors in correcting them. Langan, whose IQ is so high it cannot be quantified, dropped out of Reed College after his first year because his dirt-poor Montana upbringing failed to instill in him the “practical intelligence” to navigate the byzantine bureaucracy of the school’s financial aid office. Langan now lives with his wife, a clinical neuropsychologist he met through the ultrahigh-IQ community, on a horse ranch he bought in rural Missouri, where he spends his days working out a unified model of reality. He also occasionally makes media appearances as a sort of intellectual curiosity; these culminated in a win on the NBC game show 1 vs. 100, where his triumph over the collective intelligence of 100 average Americans earned him $250,000.
Gladwell writes that when he went to visit Langan on his farm, the genius “seemed content” with his lot in life but revealed himself to be otherwise through his conflicted answer to the hypothetical question of whether he would take a professorship at Harvard were one offered to him. “Obviously, as a full professor at Harvard I would count,” Langan said. But he added that, then again,
Harvard is basically a glorified corporation, operating with a profit incentive. That’s what makes it tick. It has an endowment in the billions of dollars. The people running it are not necessarily searching for truth and knowledge. They want to be big shots, and when you accept a paycheck from these people, it is going to come down to what you want to do and what you feel is right versus what the man says you can do to receive another paycheck.
It is this attitude, this commitment to his own outlier status, that makes Langan’s such a “heartbreaking story” in Gladwell’s mind, this despite the fact that to all appearances Langan is a happy man, a success on his own terms, who has said that he has chosen a life of meaning over one of conventional achievement. To this, Gladwell replies:
Even in his discussion of Harvard, it’s as if Langan has no conception of the culture and particulars he’s talking about…. What? One of the main reasons college professors accept a lower paycheck than they could get in private industry is that university life gives them the freedom to do what they want to do and what they feel is right. Langan has Harvard backwards.
Maybe Langan has Harvard backwards, or maybe he is a subscriber to The New Yorker:
The admissions directors at Harvard have always…been diligent about rewarding the children of graduates, or, as they are quaintly called, “legacies.” In the 1985-92 period, for instance, Harvard admitted children of alumni at a rate more than twice that of non-athlete, non-legacy applicants, despite the fact that, on virtually every one of the school’s magical ratings scales, legacies significantly lagged behind their peers. Karabel [Jerome Karabel, a sociologist who wrote The Chosen, a book about university admissions] calls the practice “unmeritocratic at best and profoundly corrupt at worst,” but rewarding customer loyalty is what luxury brands do…. The endless battle over admissions in the United States proceeds on the assumption that some great moral principle is at stake in the matter of whom schools like Harvard choose to let in–that those who are denied admission by the whims of the admissions office have somehow been harmed. If you are sick and a hospital shuts its doors to you, you are harmed. But a selective school is not a hospital, and those it turns away are not sick. Élite schools, like any luxury brand, are an aesthetic experience–an exquisitely constructed fantasy of what it means to belong to an élite–and they have always been mindful of what must be done to maintain that experience.
That is also Gladwell, explaining the Ivy League admissions process in 2005. But Harvard, he seems to have since recognized, is a fantasy seductive enough to unite Christians and Confucians. The pilgrims come to Gladwell seeking salvation in success, and for what it is worth, he gives them the old college try.