Sunday, September 16, 2007

Nature, Nurture, Literature

Oh, the poor books. The poor novels, poems, and fairy tales, endlessly re-stuffed into the hopper of the career-scholarship machine and crapped out of the bottom as worms of pink ground meat. For decades we’ve suffered the exaggerated silliness of deconstruction and postmodernism, whose central doctrine is that “discourse” (cultural stuff) creates reality. Sooner or later it had to elicit a reaction, a counter-swing to some counter-folly. The swing began in the mid-1990s and has been picking up speed lately: Darwinian literary criticism.

Its reasoning goes like this: People evolved. So whatever people do must reflect the forces of evolution. And that must include fiction-writing. So evolutionary biology must be the key to understanding fiction, not to mention everything else about being human. As a reviewer in Science put it in 2006, “everything human ultimately derives from the evolved body and brain, no matter how much culture and individual consciousness are capable of varying the forms of expression.” [1] The real drivers of literary form must, therefore, be Darwinian forces: mate selection, competition for scarce mates, the production of progeny. “Beauty is truth, truth beauty”—and they both boil down to getting laid.


Notice, in the passage just quoted, the distinction-erasing tendency of “ultimately,” the implicit assumption-mongering of “no matter how much culture and individual consciousness are capable of varying the forms of expression.” Really? No matter how much? But if we were capable of “varying the forms” a lot, mightn’t we reach a point where the relevance of specific evolutionary processes became small, even zero? Yes, we might.

Darwinian lit crit is an obvious truth inflated into an all-encompassing fallacy. The obvious truth is that certain ground rules are indeed set by biology. It is imperative that babies get made, or there will quickly be no more human race. Biology therefore requires that most of us have an abiding interest in the acts that make babies, and so we do. (Though not all of us, and not all the time.) So does this anciently obvious fact, even with all its more recent Darwinian refinements, tell us anything interesting about literary texts? Why should it? The fallacy of Darwinian lit crit is the same as that of all the other fad-reductionist schools of literary theory (Marxist, Freudian, deconstructionist, and the rest), namely, the assumption that bedrock constraints explain details. But constraints don’t explain, they constrain—an entirely different thing. Although everybody and everything in Pride and Prejudice obeys the law of gravity, Newton has nothing nontrivial to say about our experience of that book. Ditto Darwin.

The Darwinian lit critics are effectively arguing, Look, your brain evolved, so everything your brain does is controlled by evolution. Darwin owns your ass and we own Darwin’s, so bend over. The radical deconstructionists respond, Oh yeah? Well, everything you’re saying right now was learned. If not for socialization you wouldn’t be speaking a language or reading Darwin or anything. If you’d been raised alone in a closet, just you and your DNA and soup from a spigot, you would be a drooling nonperson right now. Everything’s culture, and we own culture, so you bend over. But then a physicist comes in and says: All you fools reside on a clod of dirt running around a ball of gas owned by Sir Isaac Newton and Albert Einstein and some other guys, all of whom I own, so you, etc.

Eventually one gets sick of it, finishes one’s drink, and goes out for a breath of fresh night air.

The Darwinian lit critics leave out a few things. First, the fact that we actually know nothing about what role, if any, storytelling has played in human evolution. Human beings existed for hundreds of thousands of years before inventing writing, so we don’t even know what stories were told for something like 99% of human history—and even if we knew the stories, that wouldn’t prove a ding-dong thing about how natural selection may or may not have shaped them. Romantic love as a major concern of literature, by the way, only appeared about five or six hundred years ago. There are no hard data at all, none, linking literary content to evolutionary process—only pseudo-Darwinian speculations that some people happen to find congenial.

Second, they cherry-pick. They choose literary works in which “mate selection” figures prominently, then talk about how the characters instantiate Darwinian imperatives of mate selection. They avoid the hard cases. What about The Lord of the Rings, not exactly an obscure work? Frodo not only doesn’t mate, he doesn’t even flirt. So why are millions of people reading Tolkien instead of how-to dating manuals? Oh, one can always inject more hot air into the theory balloon to cover any case. Frodo saves the world, and what could be more important to evolutionary continuity than saving the world? Yeah, let’s talk about the evolution of altruism! But at this point, we can shrug and walk away. Beating Sauron saves hobbit DNA from being fed to orcs, but that’s not why people read Tolkien. A book having the very same plot, but stripped of the atmospherics that are actually the whole point, would not be read at all.

Third, the Darwinian lit critics leave out much of what literature is actually about. Here are a few literary concerns that could only be forced into a Darwinian explanatory mold by implausible twisting: weird atmospheres as in Poe, historical remoteness, love of home places, love of alien places, adventure, religious awe, supernatural dread, nostalgia, love of the inanimate or nonhuman, philosophy, nationalistic feeling, longing for aloneness, longing for good death, unrequited love, exquisite suicide. More could be named. I’m not equally attracted to all these things, but all are common in literature. Consider a book as unpretentious as Edgar Rice Burroughs’s A Princess of Mars (1917), which has probably sold over a million copies. Sure, John Carter eventually gets into the loincloth of “the incomparable Dejah Thoris” and they make some babies. All very Darwinian, except that the mating-game is here only a flimsy convention, a formality, a pretext for the real joy of the book: its particular atmosphere, the feel of Barsoom, of the dry sea-beds, abandoned cities, weird creatures, swordplay. Those who say that this atmosphere is mere bait for a Darwinian trap should have to explain, in convincing detail, why any bait is required at all, if the Darwinian trap is our real interest, and why this bait. If Darwinian strategies are what we really want, why don’t we just write books that talk explicitly about them? Why muck about with dead civilizations and the twin moons of Mars racing across the sky when we could be reading Harlequins? For that matter, why read romances set in a Regency England that never was when we could be studying straight-up, no-nonsense mating manuals set in more contemporary, and therefore more relevant, surroundings?

Because literary desire, even at its most dilute, is not reducible to reproductive strategy.

According to a reviewer in Nature, Darwinian lit crit is cool because it makes it

easier to explain why people spend so much time reading literature. If someone sits for a long time with a book in his hands, he is probably extracting something interesting from it. In evolutionary currency, ‘something interesting’ relates to reproduction, either directly or indirectly (social competition, for example). [2]

A long, happy essay could be had out of kicking this remarkably loveless and nearsighted remark around until it wore out, but I will close with C. S. Lewis’s alternative account of why we read: [3]

My own eyes are not enough for me, I will see through those of others. Reality, even seen through the eyes of many, is not enough. I will see what others have invented. . . . Literary experience heals the wound, without undermining the privilege, of individuality. . . . [I]n reading great literature I become a thousand men and yet remain myself. Like the night sky in the Greek poem, I see with a myriad eyes, but it is still I who see. Here, as in worship, in love, in moral action, and in knowing, I transcend myself; and am never more myself than when I do.

----------------------------------------------

[1] Harold Fromm, “Reading with Selection in Mind,” Science 311 (2006): 612–613.

[2] Michel Raymond, “Literature Red in Tooth and Claw,” Nature 435 (2005): 28.

[3] C. S. Lewis, An Experiment in Criticism. Cambridge, UK: Cambridge University Press, 1979, pp. 140–141.

Illustration from Thomas Hardy, Tess of the D'Urbervilles. New York: Harper & Brothers, 1893.

Thursday, July 26, 2007

Bleaching the Controversy

Since The Chronicle of Higher Education was so unwise as to turn down the following, it appears here. The proof of its fit to the alleged theme of this blog is left as an exercise for the reader.
------------------------------------------------------------------

The third and last volume of The Collected Letters of C. S. Lewis (HarperSanFrancisco) was released on January 9, 2007, after a long delay. Despite its bulk it will probably sell briskly, as anything with Lewis’s name on it tends to. Readers braving the finished set’s 8,333 footnotes and 99 biographical appendices might get the notion that the editor, Walter Hooper, has included every relevant detail about the people in C. S. Lewis’s life, plus some—but they would be wrong.


C. S. (“Jack”) Lewis’s brother, Warren Lewis, is central to any account of Lewis’s life. The pair toughed out a lonely boyhood together in Northern Ireland and were housemates in middle and old age. But the relationship was tragic, too. In Collected Letters III, Mr. Hooper describes how Warren “would periodically disappear to Ireland on drinking binges, often absenting himself when Jack needed most help with Mrs Moore,” the elderly woman whom the Lewis brothers cared for. Later we are reminded that thanks to Warren’s boozing, “Jack was left to cope as best he could.” All this belongs in the book and none of it is new; Mr. Hooper and other biographers have often described Warren Lewis’s alcoholism. In the introduction to an earlier collection of Lewis letters (They Stand Together, 1979), Mr. Hooper even found occasion to tell how he, Hooper, once saved a badly soused Warren from soiling his pants in the lavatory of an Irish inn. He also offered the astonishing — and perhaps medically improbable — information that Warren would “sit in his study chair for as long as a fortnight without getting up, eating nothing, and drinking as much as six bottles of whiskey a day.”


So, then, what’s missing? Full disclosure. Mr. Hooper served briefly as C. S. Lewis’s secretary in 1963, not long before Lewis’s death. Their relationship was friendly; not entirely so Mr. Hooper’s relationship with Warren Lewis. By 1969, Warren had come to believe that Mr. Hooper was a sort of secretarial Rasputin, plotting for control over Jack’s literary legacy. “In his tireless, unscrupulous busybodyness Walter is the perfect Jesuit,” he snarled in his journal; “I dread the statements he may make after my death . . . which he will have the skill to make with seeming authority. I wish J[ack] had never met him.”

Here are the makings of what an ethicist might call the appearance of a conflict of interest: Mr. Hooper writes tell-all biographical squibs about a man who called him names but never shares this awkward background with his readers. One cannot learn of it from Mr. Hooper’s books about Lewis or his editorial addenda to anything in the Lewis canon, including the Collected Letters. The Cone of Silence has descended even over works not by C. S. Lewis or edited by Mr. Hooper: in the published version of Warren’s journal, Brothers and Friends (Harper & Row, 1982), Warren’s anti-Hooper rants (described only as “certain passages”) are said to have been purged “at the request of and as a courtesy to the C. S. Lewis Estate”—Mr. Hooper’s employer. To read those rants one must go either to the unexpurgated journal itself at Wheaton College or Oxford or to Kathryn Lindskoog’s controversial anti-Hooper tract, Sleuthing C. S. Lewis (Mercer University Press, 2001), in which they are quoted. (Speaking of disclosure, I indexed Sleuthing.)

Nor is this the only historical cleansing in the Collected Letters. Mr. Hooper’s short biography of Ms. Lindskoog in the back matter of Letters III makes no mention that the last few decades of her life were largely devoted to charging that Mr. Hooper forged manuscripts published as C. S. Lewis’s after his death, most notably The Dark Tower (1977; see Scott McLemee, “Holy War in the Shadowlands,” July 20, 2001). In fact, Ms. Lindskoog, who died in 2003, is primarily known today for her anti-Hooper books, which have attracted vigorous criticism as well as admirers as diverse as Arthur C. Clarke, Ursula K. Le Guin, and C. S. Lewis’s friend Sheldon Vanauken.

In short, Mr. Hooper has made a habit of selectively sanitizing history. He continues to do so in Collected Letters III by omitting all mention of serious charges made against him by two fellow players in the Lewis story, Warren Lewis and Ms. Lindskoog—players whose biographies he handles here and whose charges go straight to his own reliability. It hardly seems cricket. Surely it should not be up to the target of such charges, regardless of their merit, to decide whether their existence deserves mention.

But what does it all matter, if one is not a Lewis-worshipper committed to one side or another of the Hooper-Lindskoog wars? It matters because Lewis is one of those literary oddities whose readership continues to swell decades after their death. Tens of millions of people read him yearly. Many have their first, primary, or only contact with myth, literary criticism, literary history, philosophy, and theology through his writings. It’s a legacy worth keeping squeaky clean. That is why it is so unfortunate that Mr. Hooper, who has enjoyed a near-total monopoly on editing the Lewis oeuvre since Lewis’s death in 1963, covers with silence certain controversies in which he has himself been entangled.

It is hard to overstate the importance of Walter Hooper in the Lewis-reading world. Nobody can edit Lewis without the permission of C. S. Lewis Pte Ltd, which owns Lewis’s works and has rarely—if ever—given that permission to anybody but Mr. Hooper. His prefaces are attached to most Lewis books. He is the Lewis gatekeeper par excellence. When he endows the canon with Bowdlerized history, leaving fuller truths to be told by fringe figures or the dead, it matters.

His motive cannot be a modest reluctance to insert his own role unnecessarily into the record, for in that case he wouldn’t have regaled us with the Irish pants-wetting story (and many others). But it needn’t be sinister, either, for the result to be sad. The result is that the Collected Letters’ value as a resource is lessened. That is the problem with even an apparent conflict of interest: the appearance itself does damage. In a case like this, where recusal and disclosure have gone quietly AWOL, one is forced to wonder: what else may be missing?

Friday, June 29, 2007

Space Pioneers Wanted

A curious item from Agence France-Presse, June 19, 2007: the European Space Agency is asking for six volunteers to live in a cramped spacecraft simulator for 17 months to gather information about the psychological stresses of flight to Mars. The volunteers will be sealed in, eat space rations, and communicate with the rest of the human race only by radio.

Perks? All the Internet you want, I’m sure. Upload songs to your iPod, OK. You’ll also have the exhilarating knowledge that you are furthering humanity’s Destiny in Space and plenty of time to study for a Master’s in astronautics. Drawbacks: No fresh food. No printed books. Harmonica practice may or may not be tolerated by shipmates. Severe limitations on art materials and field sports. No trees, grass, sun, sky, or soil. No walks in the fields of summer—no walks anywhere. No new faces. Just you and five other dedicated Euro-Russian souls sharing 550 cubic meters of space, about the volume of a 20-by-20-foot room per person, carved up into cubicles.
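
Skeptics can check that per-person figure; here is a quick back-of-the-envelope sketch in Python. The 8-foot ceiling is my assumption, not anything in the AFP item:

```python
# Rough check: 550 cubic meters shared six ways, versus a 20 x 20 ft room.
M_PER_FT = 0.3048  # meters per foot

total_m3 = 550.0              # simulator volume reported in the AFP item
per_person_m3 = total_m3 / 6  # six volunteers

# A 20 x 20 ft room with an 8 ft ceiling (the ceiling height is assumed):
room_m3 = (20 * M_PER_FT) * (20 * M_PER_FT) * (8 * M_PER_FT)

print(f"per person:   {per_person_m3:.1f} m^3")  # ~91.7 m^3
print(f"20x20x8 room: {room_m3:.1f} m^3")        # ~90.6 m^3
```

Close enough for blogging.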

Certain realisms will be lacking, casting doubt (I would argue) on the relevance of the psychological data gathered: unlike a real space mission, you can opt out whenever you like and all emergencies will be imaginary. Nothing life-threatening can happen, unless the Russian Institute of Biomedical Problems, where you’ll be canned, suddenly burns down. On the other hand, since you won’t actually be flying to Mars, you’ll lack the sustaining sense of heroic self-importance that real astronauts would presumably enjoy.

According to Viktor Baranov, a Russian scientist with the project, “the problem is that it is very difficult to find healthy people for this kind of experiment.”

I’ll bet.

Trapped on Earth . . . free in space!

What about sex? It depends. As of June, 2007, male applicants for the experiment were outnumbering females almost ten to one. Apparently America isn’t the only place where it’s harder to find women than men who are gung-ho about human spaceflight. Why?

Maybe men are more socialized from childhood to perceive themselves as empowered or personally extended by machines. (About 80% of pickup-truck buyers are men.) In any case, it is a fact that millions of people, most of them men, would rather not live on Earth at all. They believe that our only hope for survival and spiritual well-being is for as many of us as possible to go live on Mars, or elsewhere in space, instead.

Putting aside questions about cost and feasibility in favor of questions about fundamental wisdom, I can’t help but think of Body Integrity Identity Disorder (BIID), whose sufferers are passionately convinced that they cannot be happy until a healthy limb, usually a leg, has been amputated. The belief that leaving Earth for a mechanized habitat in space would produce not only personal happiness but global weal, as space-colonization advocates allege, is curiously reminiscent of BIID: an ultimately pathological urge to sever what is integral, to substitute a void for a living connection, to replace the organic with the prosthetic. In the Space-Ho! mindset, as in BIID, radical self-curtailment is perceived not as an option, luxury, or tool but as an existential necessity. Most BIID sufferers, incidentally, are men.

Sunday, June 10, 2007

Review:

Carrying the Fire: An Astronaut’s Journey
by Michael Collins
Ballantine Books, 1975

Michael Collins, command module pilot for Apollo 11, produces better results than most professional nonfiction scribblers and does it without using a ghost writer. He’s likable, snarky, and honest; surely this is the best-written of all the astronaut books. Particularly relevant, over 30 years after the book’s appearance — especially since President Bush announced his Vision for Space Exploration in 2004 — is his take on whether we should have sent anyone to the Moon at all.

Gemini 10 crew, July 18, 1966: John Young, Michael Collins (at right).

Collins favors the Vitamin F theory: frontiers make societies healthy, we need one, space is it. He is a more articulate exponent of this view than, say, Robert Zubrin (de facto leader of today’s Mars-colonization movement), and never radiates Zubrin’s over-the-top messianism, but his talking points are much the same: Columbus, the old West, the expansive essence of humanness. Speaking of the American frontier, he writes: “Some people were never content to huddle in protective little clumps along the East Coast, but pushed westward as boldly as circumstances permitted. When horizontal exploration met its limits, it was time to try the vertical, and thus has it been since, ever higher and faster.”

It’s not surprising that an astronaut sees space travel as the cutting edge of human history. With cosmologists it’s cosmology, with evolutionary biologists it's evolutionary biology, with computer scientists it's artificial intelligence, with postmodern culture scholars it’s postmodernism. Would morticians nominate embalming, sanitary engineers garbage-collecting? Perhaps not: some professions discourage self-importance by their very nature.

Despite what Collins says, the people back East weren’t “huddling” in 1850. They were living the usual various, vivid, ridiculous, heroic, dastardly range of lives that people have always lived. A few, like Henry Thoreau, were exploring a completely different kind of frontier while the Indian-killers and land-stealers were pushing so boldly westward—a frontier that anyone could tackle, even without (especially without) a Conestoga wagon or a Saturn V booster under their butt. Even in old, gray, frontierless Europe, life was not exactly empty of interest: while we were busy conquering Florida and Kansas, the British bourgeoisie were busy inventing the novel.

No. It won’t do, because it’s not historically true: there is no inherent virtue in frontiers. When our intake of Vitamin F was at its height we were spoiling the land, reaming the Indians, raping Africa, staggering pell-mell toward the Civil War. And regardless of whether our frontier days were a period of moral and democratic glory, which they weren’t, their mass migrations and resettlements bear at most a weak analogical resemblance to the “ever higher and faster” records set by aerospace technology over the last half-century — the propulsion of a few unusual people like Collins via ever more expensive, destructive, and wasteful machines to ever more titillating altitudes until, after Apollo, the inevitable undignified slippage back down the impossibly steep curve of exponential difficulty. (A round trip to the Moon takes a week; a round trip to Mars would take two years.) Space travel can never be an option for large numbers of people: that is as technically impossible as giving everyone on Earth a mansion the size of Manhattan. “Now we have the capability to leave the planet,” Collins writes, “and I think we should give careful consideration to taking that option”—but what you mean we, Kimosabe? No “we” that I, or anyone I know, belongs to. Ditto for about 6.5 billion other folks.

Collins falls into an interesting self-contradiction on the question of Vitamin F. First he expresses head-shaking bafflement at the possibility, articulated after Apollo 11 by Harvard University biochemist Dr. George Wald, that young people might “see in this an exercise of the old and well-entrenched, an exercise in great wealth and power, heavy with military and political overtones. I am afraid that they will feel a little more trapped; a little more disillusioned, a little more desperate.” Collins comments: “I am shocked to think that what we have done causes anyone to feel ‘disillusioned’, much less ‘desperate’ . . . [This] defeats me completely. I hope it’s not a widespread attitude; I simply cannot believe that it is” (458).

He should have tried harder. Is it really impossible to imagine that young people might be depressed by the claim, feverishly trumpeted and celebrated across our society from TV to headlines to ticker-tape parades, that humanity’s finest, freshest hope is a place few of us will ever see? Is it odd to be discouraged by the thought that the greatest triumphs of the human spirit will henceforth depend on hierarchic federal bureaucracies, humungous corporations, and throwaway high-tech marvels the size of skyscrapers? What’s my place on the great new frontier? Drive my little commute, pay my little taxes, cheer my little cheer as the picked handful blasts skyward? That youthful sense of entrapment that Collins finds so inexplicable is the frontier spirit he thinks he believes in. It’s the frontier spirit fighting for its life against the space mystique’s implication that most of us can have nothing to do with the future except to be left behind by it — after helping pay for it.

The other half of the contradiction appears a few pages later, when Collins contemplates the Apollo 11 astronauts’ own post-coital letdown. He views Buzz Aldrin’s severe depression after Apollo—Aldrin was the second man to walk on the Moon—as caused by lifelong parental pressure and the lack of anything big to follow the Moon trip: “Suddenly it is over: Buzz the pilot fish has been thrown clear of the shark Apollo and is swimming around, desperately looking for another streamlined creature of speed and danger to attach himself to. There aren’t any, Buzz . . .” (470). As for himself: “I share with [Buzz] a mild melancholy about future possibilities, for it seems to me that the list of exciting things to do here on earth has diminished greatly in the wake of the lunar landings. I just can’t get excited about things the way I could before Apollo 11; I seem gripped by an earthly ennui which I don’t relish, but which I seem powerless to prevent” (471).

Strange that Collins cannot see Aldrin's depression and his own ennui, his own sense of diminished earthly possibility, as illuminating the unease of others. The stimulant properties of human space “exploration” — which I put in quotes because astronauts can only fly to places that have already been mapped in detail by remote control — are inevitably temporary. Human space flights are not ground-gaining leaps but gravity-defying stunts, and it is the nature of stunts to end and, if repeated, to pall. It is no mystery that the later Apollo missions were yawned at by the same millions who went nuts over the first one. The dialectic of pleasure decrees the inevitable rise and fall of all flowery, showy joys that cannot take root in everyday life.

Nor should we forget the money. Collins answers the claim that “it is immoral to support exploration while our cities rot” by arguing that exploration “cannot be examined on an either/or basis, that our national economic and budgetary process prevents the simple transfer of funds from one project to another, that without a space program our cities would still rot” (473–4). But “simple transfer” is a straw man. Resource competition is real, even if “simple transfer” isn’t. Federal money spent in one place adds to the pressure against spending money in all other places. NASA’s human-spaceflight budget and (far worse) the military budget have indeed helped rot our cities, not to mention our farmlands and our wildernesses. They have narrowed our choices and dimmed our prospects and starved real space exploration, the robotic kind (e.g., Voyager). But, alas, it would take an honesty even more extraordinary than Collins’s—and he is honest, and I like him, and I am deeply fascinated by the program in which he participated—to acknowledge that the effort that shot him around the Moon and put his name in the history books was paid for by cheating the hopes and needs of others.

Surprisingly, Collins advances the silly chestnut that seeing Earth from above would make people less nationalistic: “I really believe that if the political leaders of the world could see their planet from a distance of, let’s say, 100,000 miles, their outlook could be fundamentally changed. That all-important border would be invisible, that noisy argument suddenly silenced.” Exactly what H. G. Wells and others said in the early days of airplanes, which have brought us mostly strategic bombing, New Zealand raspberries in January, and highly polluting, spiritually dead globetrotting for the middle-class masses. (Yeah, I’ve done it too.) Collins buys the no-borders-from-above myth because he buys the core fallacy of the space movement—the notion that spiritual realities are shaped by physical situations. Go West, young man, to be a better young man. Fly higher, young woman, to be a better young woman. Blast high enough to see all men as ants and you will understand that all men are brothers. But it has never worked that way, and it never will.

Thursday, June 7, 2007

SAT Scores

It’s SAT season. For the last week or so, all across this mighty land, millions of high-school kids who took the SATs in May have been getting their scores online. A young friend just e-mailed to tell me that she got 760 in critical reading, 640 in math, and 680 in writing.

What do these numbers mean? Well, arguably, a lot. Most college admissions offices still look at them and get very yippy and excited when they see a combined score in the top percentile or two. On the other hand, arguably, not much.

First, let’s be clear that the private nonprofits who administer the SATs, PSATs, and GREs, namely the College Board and its drooling hunchbacked assistant, the Educational Testing Service, were not appointed by God. They’re just people in offices who make this stuff up. Nor should we be fooled by the word “nonprofit” into thinking that they’re doing it for our own good: “nonprofit” is not a synonym for “volunteer.” Careers, paychecks, publications, and reputations are all on the line. Testing, even nonprofit testing, is a business, not a charity.

Second—let the joyous news be spread—it’s not a science, either. SATs don’t measure anything, any thing. Your “scholastic aptitude”—the original words behind the “SA” in SAT, which were officially dropped in 1994, leaving the acronym to float free—is not a fixed thing in your head, like an electrical charge or a bullet from an old war wound. It’s a construct, a fiction. An SAT score’s value, like that of a $20 bill, exists only because people have agreed to agree that it exists. And how are they persuaded to do that? Well, one little trick that is played to make the SATs look more objective, scientific, and reality-based is hidden in plain sight. All the scores end in zero.

Ever notice? You can’t get 761 or 653, only 760 or 650 or some other multiple of 10. That zero could be deleted right across the board and all SAT scores would still convey just as much information, or as little. Therefore, all SAT scores are inflated by a factor of at least 10. It’s worse, actually, because the section scale doesn’t go from 0 to 800 but from 200 to 800. That means there are really only 61 possible scores for each separate section of the SAT: i.e., 200, 210, 220, and so on, on up to 800. Dividing 800 (apparent resolution) by 61 (actual resolution), we find that all SAT scores are actually inflated by a factor of about 13.
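
If you like, you can make that arithmetic concrete. A throwaway sketch in Python, using nothing but the scale details just described:

```python
# The reportable SAT section scores: 200, 210, 220, ..., 800.
possible_scores = range(200, 801, 10)

actual_resolution = len(possible_scores)   # 61 distinct values
apparent_resolution = 800                  # what the big numbers imply
inflation = apparent_resolution / actual_resolution

print(actual_resolution)    # 61
print(round(inflation, 1))  # 13.1 -- "inflated by a factor of about 13"
```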

In hard-science land, they would say that the SAT score displays spurious resolution. (Trust me, I’m an engineer. Heh heh.) If a hard-science person announced that they had measured some quantity on a scale of 800, they would be implying in no uncertain terms that they could measure differences or changes of one part in 800. If they could only measure differences or changes to one part in 61, they would use a coarser scale—one that doesn’t pretend to record many more than 61 distinctions. Tacking zeros onto one’s numbers to make them sound bigger, more important, and higher-resolution is pseudoscience. It’s bad.

But by all means, take the SATs or GREs or whatnot if you must. I did. We all have to jump through some hoops to earn our sardines from the trainers. Just remember that you are not your score on the ACT, SAT, GRE, IQ, LSAT, or any other obnoxious bite of alphabet soup. Nobody is.

I’ve always felt that a bad attitude—on test day, not during the prep phase—is a good ally. Try thinking rebellious thoughts, feeling terroristic feelings, as you pencil in the little ovals or click on the screen choices. When I sat down in the early 1990s to face the computerized GREs (Graduate Record Exams, like the SATs only meaner), I was positively pressurized with hatred and loathing. The verbal portion threw me question after question, even multiple-choice vocabulary questions for Chrissake (what do they prove?), getting steadily harder and harder, definitions and analogies and antonyms. Some little block of code was looking for my limit, trying to “adapt” (as the GRE brochure put it) to my supposed level of ability. Finally the test came up with the word scission. I’ve never seen scission in print before or since, but, having used scissors since kindergarten, I was able to guess its meaning correctly (“a cutting, dividing, or splitting; separation”). Choice (c). Take that, you stinking machine!

And then something, apparently, snapped. Not in me—in the stinking machine. The remaining questions were all absurdly easy, and I cruised through them bathed in a sense of cold, bitter triumph. John Henry beats steam hammer, and I didn’t have to die for it.
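
Incidentally, the “adaptive” business the brochure bragged about is easy enough to caricature. Here is a toy sketch in Python of how such an engine might hunt for your limit; every detail (the ten difficulty levels, the step size, the item pool) is invented for illustration, and none of it is ETS’s actual algorithm:

```python
import random

def toy_adaptive_test(answer, items, start_level=5, num_questions=10):
    """Caricature of a computer-adaptive test.

    `items` maps a difficulty level (1-10) to a list of questions;
    `answer` is a callback returning True when the test-taker gets a
    question right. The engine steps difficulty up after each hit and
    down after each miss, homing in on the taker's supposed limit.
    Invented for illustration; not the real GRE engine.
    """
    level = start_level
    for _ in range(num_questions):
        question = random.choice(items[level])
        if answer(question):
            level = min(10, level + 1)  # got it: harder next time
        else:
            level = max(1, level - 1)   # missed it: easier next time
    return level  # the machine's verdict on your "level of ability"
```

The point of a loop like this is efficiency: it spends most of its questions near your limit instead of wasting them far below it, which is also why it feels like a duel.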

Take-home message: Tests are designed to classify and control us, not benefit us. And no matter how many numbers they spew, they are not science.

Monday, May 28, 2007

Atheism Deserves Better

I’m not an atheist, but I’m seriously tempted to send money to the American Atheists. Their ad is adorable: a happy nuclear family bathed by sunset light and accompanied by the story of how the good-looking daughter was persecuted by her high-school principal for not participating in pre-game school prayers. The AA went to court on her behalf and won. Good show!

Happy atheists.


Besides what they’re for, look at what they’re against: waxwork TV evangelists, public stoning of fornicators, self-flagellation, and so much more.

The big problem with atheism is its too-frequent religiosity. For the last few years, the U.S. media have been all a-titter about the prominent intolerance of atheists like Richard Dawkins (“faith is one of the world’s great evils, comparable to the smallpox virus but harder to eradicate”) [1] and Christopher Hitchens (“religion poisons everything”) [2]. This kind of atheism is nothing new, but it is tragic in the same way that bad religion is tragic: as the betrayal of a good thing.

Consider Daniel Lazare’s May 10 review of Dawkins’s and Hitchens’s latest books in The Nation [3]. Lazare, referring to the 2006 Templeton Foundation prayer study, which found that systematically prayed-for surgery patients fared no better than controls [4], chortles that the study shows “praying for a quick recovery is on a par with crossing one’s fingers and wishing for a Mercedes.” Christian disavowals of the study itself are, he suggests, in bad faith: “People like” British theologian Richard Swinburne, who criticized the study—you know, those people—would surely have trumpeted positive results as proof of God’s existence.

And so, self-appointed Lancelot of rigor and reason, Lazare rides his self-righteousness at full gallop into an intellectual wall. The fact is that well-known Christian writers have long dismissed the very idea of such prayer experiments. George MacDonald wrote in 1885,

As to the so-called scientific challenge to prove the efficacy of prayer by the result of simultaneous petition, I am almost ashamed to allude to it. . . . That God should hang in the thought-atmosphere like a windmill, waiting till men enough should combine and send out prayer in sufficient force to turn his outspread arms, is an idea too absurd. . . . A man capable of proposing such a test, could have in his mind no worthy representative idea of a God, and might well disbelieve in any: it is better to disbelieve than believe in a God unworthy. [5]

George MacDonald, 1824–1905

Nor am I quoting some long-forgotten nobody; the book is still in print. More recently, C. S. Lewis wrote that such a test would be impious and, even if apparently successful, would not “prove the Christian doctrine at all.” [6] He was right. Real science, as exasperated scientists have been trying to explain for years to proponents of Intelligent Design, doesn’t do the supernatural.

The sweeping, jocular ignorance with which Lazare, Hitchens, and Dawkins handle theology closely resembles the stock Creationist style of dissing evolution. Intellectual bigotry always flatters itself as common sense.

It’s too bad; atheism deserves better.

-------------------------------------------------------------------------
[1] www.thehumanist.org/humanist/articles/dawkins.html (accessed May 28, 2007).

[2] Subtitle of Hitchens’s book God Is Not Great, 2007.

[3] http://www.thenation.com/doc/20070528/lazare/3 (accessed May 28, 2007).

[4] Benedict Carey, “Long-Awaited Study Questions the Power of Prayer,” New York Times, March 31, 2006: http://www.nytimes.com/2006/03/31/health/31pray.html?ex=1301461200&en=4acf338be4900000&ei=5088

[5] George MacDonald, Unspoken Sermons, Second Series; available from Johannesen Publishing (http://www.johannesen.com/index.html) and downloadable for free as an e-text.

[6] In the essay “The Efficacy of Prayer,” which first appeared in The Atlantic Monthly, Jan. 1959.