Thursday, June 30, 2011

Fusion Jazz: Why Fusion Power Is Doomed


The National Ignition Facility (NIF) is a US-funded project to produce power from fusion, that is, by smacking together light atoms rather than busting up heavy atoms as in ordinary nuclear reactors. At the NIF, flashes of light from giant lasers zap tiny pellets of hydrogen, triggering fusion.

A recent New York Times article on the $4-billion facility illustrates the difficulty of staying oriented to reality in the face of mind-boggling gadgets as big as office buildings that tap into forces of nature most of us don’t really understand. The article, “Fusion Experiment Faces New Hurdle,” states that

The tipping point for nuclear fusion is “ignition,” the moment when the lasers release the same amount of energy that is required to power them.

Who doesn’t love ignition? “We have ignition!” is what the Apollo mission controllers used to say just as the flames burst out of the giant rocket and it flew up to glory. I often repeat the same phrase, probably annoyingly, when I light a fire in our woodstove. Ignition is when things light up and really get going. But it’s elusive for fusion, as the NIF’s backers admit: people have been trying to figure fusion out for half a century, lured on (and luring others on) with the promise of clean, inexhaustible electricity . . . someday. Ah, if only we could achieve ignition. The facility’s backers claim that they will do so within a year.

[Image caption: Bringing hype to Earth, too.]

But there’s something fishy about the goal itself. First, in almost any other setting, “ignition” denotes the beginning of a self-sustaining reaction, energy from burning fuel causing further burning of fuel. But the fusion reactions that the NIF proposes to “ignite” are not even close to self-sustaining in this sense: they will flicker into being and then go dark a fraction of a second after the giant laser flash ends. In this context, “ignition” is merely the condition in which as much energy, in the form of raw heat, is released by the fusion reaction as is required, in the form of laser light, to make it go. (In the NIF’s words, “ignition is defined as fusion energy output greater than or equal to laser energy incident on the target.”) Let’s assume, generously, 100% efficient lasers that deliver precisely as much energy to the fusion reaction as they consume in the form of electricity. What the NIF proposes to achieve, therefore, is not what most of us would call “ignition” at all but, at best, energy breakeven: heat out equals electricity in.

But treating electricity and heat as comparable forms of “energy” is as dubious as treating pizza and sewage as comparable forms of “matter.” An ordinary coal-fired or nuclear power plant, which releases heat from fuel to boil water to run turbines to spin generators to churn out electricity to run our cell phones and light bulbs, is forced by basic physics to throw away about 60% of the heat it obtains from fuel. You can turn electricity entirely into heat, but you can’t turn heat entirely into electricity. This is because heat is perfectly disorganized energy, energy flying every which way, while electricity is highly organized and can be used to do almost anything. Likewise, you can turn 100% of a pizza into sewage (by eating it) but you can only turn a fraction of your sewage into pizza (by using it as fertilizer and growing wheat, tomatoes, and dairy cows from scratch). Electricity is energy pizza, and heat is energy sewage. The NIF is proposing to turn X units of pizza into X units of sewage, and this it calls “ignition.”
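For those who want the “basic physics” spelled out: the second law of thermodynamics caps the fraction of heat any engine can turn into work. Here is the standard back-of-the-envelope bound, with illustrative temperatures rather than any particular plant’s:

```latex
% Carnot bound on heat-to-electricity conversion (illustrative numbers).
% An engine running between a hot source T_h and a cold sink T_c can
% convert at most this fraction of its heat input into work:
\[
  \eta_{\max} \;=\; 1 - \frac{T_c}{T_h}
  \;\approx\; 1 - \frac{300\ \mathrm{K}}{800\ \mathrm{K}}
  \;\approx\; 0.62
\]
% Real steam cycles fall well short of this ideal limit; roughly 0.40
% is typical, hence the ~60% of fuel heat thrown away.
```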

The NIF’s Holy Grail is therefore, to be blunt, meaningless (though the Times quotes it as if it made sense). If fusion is contending to be a viable power source, a less absurd goal would be to produce at least as much useful energy, in the form of electricity rather than mere heat, as is consumed. Because of inevitable heat-to-electricity losses, this means that a fusion facility would have to produce about 2.5 times as much energy in the form of heat as it consumes in the form of electricity, counting not just its lasers (as the NIF does) but its power electronics, air conditioners, pumps, computers, bathroom lights, and every other scrap of overhead. And that goal will be much, much harder to achieve than mere input-output parity.
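To see concretely what that 2.5 factor does to NIF-style “ignition,” here is a toy accounting sketch in Python (the 40% heat-to-electricity figure is the same ballpark number used above; the scenario values are illustrative, not NIF data):

```python
# Toy fusion-plant energy accounting. Illustrative numbers only.

HEAT_TO_ELECTRIC_EFF = 0.40  # ballpark efficiency of a thermal power plant

def nif_ignition_gain(laser_energy_in, fusion_heat_out):
    """NIF's criterion: fusion heat out >= laser energy in."""
    return fusion_heat_out / laser_energy_in

def electric_breakeven_gain(total_electric_in, fusion_heat_out):
    """Electricity out vs. electricity in, after heat-to-electric losses."""
    electric_out = fusion_heat_out * HEAT_TO_ELECTRIC_EFF
    return electric_out / total_electric_in

# The lasers draw 1.0 unit of electricity; the burn returns 1.0 unit of heat.
print(nif_ignition_gain(1.0, 1.0))        # 1.0 -> "ignition!" by NIF's standard
print(electric_breakeven_gain(1.0, 1.0))  # 0.4 -> but 60% of the energy is gone
# True electric breakeven needs 1 / 0.40 = 2.5 units of heat per unit in:
print(electric_breakeven_gain(1.0, 2.5))  # 1.0 -> merely breaking even
```

And the sketch charges only the lasers’ draw to total_electric_in; counting the pumps, computers, and bathroom lights raises the bar further still.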

But even electric-in, electric-out breakeven would be, in itself, useless. A lump of cold rock has achieved “ignition” by that standard; a fusion facility that merely balanced electricity in with electricity out would be nothing but a cyclopean, multi-gigabuck paperweight. To be useful, it would have to produce a large electric surplus and do so at low enough per-kilowatt-hour cost to be competitive with all other available sources of equally clean (or cleaner), inexhaustible electricity, such as wind or solar power (with, let's say conservatively, enough storage capacity thrown in to make them dispatchable around the clock). And these competitors are getting cheaper all the time, according to standard learning curves.

The NIF’s apples-to-kumquats breakeven standard is, I believe, touted by fusion advocates because it is the only standard that fusion has a prayer of satisfying in this century — or any other. A fusion reactor that churns out net useful electric power at a per-kWh price competitive with realistic, equally clean (or cleaner) alternatives is almost unimaginable, even in the remote future — when those renewable alternatives will be even cheaper than today due to technological improvements.

It would be hard to set up a more hopeless contest. Fusion is a solid-gold tortoise starting a thousand miles behind a whole pack of very healthy, focused jackrabbits, all of whom already have a claw over the finish line.

Thursday, November 25, 2010

My Fifteen Milliseconds of Fame

A few weeks ago I posted an impulsive, long-winded comment about patterns of climate denialism on the website of Nature that said nothing which did not seem to me painfully obvious (comment #15330 here). I wondered if I was a time-wasting fool to write it at all, and blushed when I realized (too late, it is always too late) that I had repeatedly committed the awful typo “AWG” for “AGW” (anthropogenic global warming, an acronym favored among climate-change denialists). Argggh.

I was therefore surprised to receive an e-mail from Nature staff asking me if I would be willing to abridge the letter for publication in their magazine. Uh . . . well . . . yes! Nature is one of the two premier general-coverage science journals in the world (the other is Science), and a chance to sound off on its Correspondence page is an honor not to be declined.

Here’s the letter as it appeared in this week’s Nature, with the editors’ heading:


An Intellectual Black Hole

News items on climate change are now regularly flooded with negative online comments (see, for example, go.nature.com/Lfigec). These tend to have certain features in common.

The comment writers struggle to find words that are emphatic enough to express their contempt for anthropogenic global warming and for the 97% or so of active climate scientists who accept its reality. They feel attacked, lied to and conspired against: they are intensely angry. Their comments are often pervaded by heavy sarcasm. In their view, climate scientists are not merely mistaken, but foes of truth and liberty. They see themselves as fighting a powerful enemy — a posture that can be addictive.

These comment authors rarely engage with the original science (in the above example, an article in Geophysical Research Letters). They have unplugged from the scientific discourse because they believe it to be driven by crypto-environmentalist (or crypto-communist) ideology. This conviction characterizes fundamentalisms: a true believer does not need to engage, they already know. The denialists charge that it is the scientists who refuse to consider the evidence; alternatively, scientists know all about it and are lying. The parallels with some creationist rhetoric are striking.

Certain standard fallacies and counterfactuals are held by these commenters as irresistible ‘gotchas’ — any one of which makes the idea of human-induced global warming absurd (and further thought unnecessary). For example, a popular line of argument is that because climate has changed in the past without human input, humans cannot be changing it now. Mistaken beliefs are held that scientists on the Intergovernmental Panel on Climate Change, climate modellers and other researchers ignore factors such as water vapour, urban heat islands and the Sun.

The peculiar danger of any full-blown conspiracy theory is that it can become an intellectual black hole, a one-way trip. Hope lies mostly in keeping people out of the hole, rather than trying to rescue those who have fallen in.

Larry Gilman

Sharon, Vermont, USA

Nature, Nov. 25, 2010, 468:508.


One of the bits I wish I could have retained from my original effusion is the following extended version of the “black hole” idea, which says how such belief systems can swallow a mind whole:

The peculiar danger of any full-blown conspiracy theory — and the belief that scientists are actually conspiring to fake “AGW” is often expressed in these [denialist] comments — is that it can become a sealed-off mind-bubble, an intellectual black hole with no way out. No evidence can convince you that the world is not a dream, if you choose to believe that it is one, because any possible evidence could itself be dreamed: likewise, no scientific facts can change the mind of a person who believes firmly enough in a global “AGW” conspiracy because the only possible source of such facts — scientists who study climate — is known a priori by the believer to be corrupt.

Pleasant dreams.

Saturday, April 17, 2010

Dr. Strange Versus Global Cooling

Global-warming denialists often proclaim that in the 1970s scientists predicted global cooling, then in the 1980s suddenly changed their minds to global warming. Implication: scientists are always changing their minds, so don’t take them seriously when they declare that the world has been warming for decades and is going to get a lot hotter before it’s through. What’ll the big scare be tomorrow? Global polka-dots? Hah!

It’s effective rhetoric even though it makes no sense: So what if scientists have changed their minds — isn’t that what they’re supposed to do, when the facts warrant it? But the point is moot anyway because the story is false. There was no 1970s global cooling scare—not among scientists. A few scientists did suggest that cooling was a possibility, but the overwhelming majority, as reviews of the literature have shown, said that global warming was on its way. This view solidified in the 1980s with increasingly deep knowledge of atmospheric physics. And warming is exactly what we have seen since then and will continue to see: the first decade of the 21st century was, according to NASA, the hottest in the instrumental record, which goes back to before the Civil War.

Whence, then, the myth of the Great ’70s Global Cooling Scare? Several pop media outlets, such as Newsweek (“The Cooling World,” April 28, 1975), did actually publish scare stories on global cooling. These grossly misrepresented the state of the climate science; their contribution to human understanding has been entirely negative, as they now make excellent ammo for the anti-science crowd.

But not all popular media were fooled. Marvel Comics, trying hard in the 1970s to be hip on issues like race, drug addiction, and urban alienation, managed to get it basically right. In Doctor Strange No. 6, February, 1975, the good-guy sorcerer Dr. Strange and his lover/sidekick Clea are confronted by a bitter heroin addict while walking in Central Park. The man points to the sky and says:

[Image: panel from Doctor Strange No. 6]

Right on! Well, the science is a little wrinkled — nothing is “cutting the oxygen,” and you can’t see the stuff that’s causing the greenhouse effect — but what do you expect from a heroin addict? In any case, Dr. Strange author Steve Englehart landed a lot closer to the mark than the high-paid babblers at Newsweek, Time, and elsewhere who were ignoring the scientific literature and pushing global cooling.

That one slightly awkward frame does not do justice to the trademark fluidity of artist Gene Colan’s pencils. Here it is in its full-page context (well-handled by inker Klaus Janson):

[Image: the full page from Doctor Strange No. 6]

The moral: if you want to understand the world, you are infinitely better off reading comics than listening to the likes of George Will and Rush Limbaugh, who are still dishonestly pushing the myth of a 70s global-cooling consensus among scientists.

But climate-change denialism is here to stay no matter how many comic books we read (and I read my share). Its star may rise higher or sink lower but it will never set, no matter how hot the climate gets, because no form of denialism, once deeply entrenched in a wide following, has ever totally disappeared from the face of the earth. As Dr. Strange says in the last frame, “It is beyond even a sorcerer supreme!”


Thursday, March 25, 2010

A Factoid is Born


A recent post by Nick Bilton of the New York Times, “Former Book Designer Says Good Riddance to Print,” claims that the day of the print book is finally fading. Most of Bilton’s blog post is about a blog post by the “former book designer,” Craig Mod, making these present remarks a blog post about a blog post about a blog post. Such is e-life.

Bilton likes what Mod says. Paper books are, or should be soon, on their way out. According to the appropriately named Mod, “Good riddance.” Bilton enthuses,

For hundreds of years, we’ve been consuming information on static pages, and for the most part, this content has been presented with a beginning, middle and end. Non-linear, digital platforms will prompt a new range of thinking about stories and how to tell them.

People started predicting this exact thing in the 1980s, when computers were already capable of implementing any textual revolution desired, but the promised era of nonlinear fiction — “hyperfiction” — never arrived and it never will. Hyperfiction is a non-starter because it’s a bad idea. It claims to crown readers with “choice,” to liberate them from the page, to raise them to co-creator status: but mousing around a network of texts is only pseudo-creative, like channel-flipping. As readers we are already authentically creative. We are creative when we simply read, most of all when we read well, constructing richly unique inner realities in response to the exquisitely sense-poor stimulus of the printed word. A book is not an information download for your head, it’s an invitation to get to work.



Hyperfiction fails because it is based on the stunningly dumb notion that readers of print on paper are mere slaves of the “static pages.” How could any literate person think so? When all goes well, and it often does, the outwardly “static” reader, staring at the “static” page, is exploding with inward experience. Everyday reality can dissolve entirely into a waking dream of overwhelming intensity as one contemplates the moveless bugs on the paper. Engaged readers are co-creators, collaborative artists, and their working material is the story “with a beginning, middle, and end,” which predates print by millennia. Limitlessness, especially the pseudo-limitlessness of a folder full of branching hypertexts, bores. Limitation excites. Plot excites. Beginning, middle, and end excite. That’s not a prediction, it’s an observation.

Bilton also says:

Under Mr. Mod’s analysis, the common paperback and many other physical books are disposable. He writes, “Once we dump this weight, we can prune our increasingly obsolete network of distribution. As physicality disappears, so, too, does the need to fly dead trees around the world.”

In contemplating the computerization of reading — some amount of which I must think is fine, or I wouldn’t be posting this — it is simply weird to say that “physicality disappears.” Physicality doesn’t go anywhere. It’s transformed from paper and glue to flatscreens, batteries, power cords, plastic chassis, microchips, carrying cases of PVC fabric. Instead of paper — the longest-lived, toughest, most environmentally sustainable information-storage medium ever invented — the physicality of the book becomes a flow of short-lived, landfill-bound gadgets consuming large amounts of power in their manufacture (especially the chips, despite their small bulk) and smaller amounts to make the reading act possible moment by moment, all of it generated and distributed by a vast network of ugly and polluting devices.

Good reading itself, which must make us wiser and wilier perceivers of pattern, of choice, of irony and implication, perhaps even of beauty, cannot coexist with oblivion to these facts or with indifference to them. Every medium is a message (though never simply the message), sometimes many messages at once. What is the gadget medium saying? Nothing, according to the prophets of future schlock. “Physicality disappears.” Except — it doesn’t. Of course it doesn’t. The ultimate naiveté is not to romanticize paper, but to pretend that there is no message built into the gadgets: to blurt out for the millionth time the sophomoric pseudo-insight that content is the only thing that matters, when it isn’t.

What about greenishness? Might e-books really cause less environmental harm than paper books?

It depends on how you stack ’em. Pile the paper books high enough next to any gadget, and sooner or later eco-accounting must favor the gadget. The Cleantech Group did a widely reported 2009 study claiming that the Kindle e-book reader is a carbon-footprint win over paper books: “on average, the carbon emitted in the lifecycle of a Kindle is fully offset after the first year of use.”

Unfortunately, it is not possible, without joining the “Cleantech network” for a sum that Cleantech does not disclose online, to read the actual report and decide for oneself whether this claim has any merit. I have not been able to discover any online mention whatever, despite wide coverage of the Cleantech report, of its methodology. How did they come up with their estimate that each Kindle displaces 22.5 books per year per reader? Is that 22.5 hardcovers, or 22.5 paperbacks, or some market-weighted average of the two? What about used books — the “carbon footprint” of which must be divided by the number of owners per book (at least 2)? Used books account for about 15% of US book sales by volume (8% or 9% by dollars), and the number has been growing lately. If Cleantech assumed 22.5 brand-new hardcovers per Kindle per year, then its conclusions are dubious indeed.
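Because the methodology is hidden, any check has to run on guessed inputs. Here is a toy sensitivity sketch (every footprint number below is an illustrative assumption, not a figure from the Cleantech report) showing how the “offset after the first year” headline degrades as the displacement assumptions weaken:

```python
# Toy sensitivity check on the "Kindle pays for itself in a year" claim.
# ALL numbers are illustrative assumptions, not from the paywalled report.

KINDLE_KG_CO2 = 168.0      # assumed lifecycle footprint of one e-reader
NEW_BOOK_KG_CO2 = 7.5      # assumed footprint of one new printed book
OWNERS_PER_USED_BOOK = 2   # a used book's footprint is shared by its owners

def payback_years(books_per_year, used_fraction):
    """Years until displaced-book emissions offset the device."""
    per_book = (NEW_BOOK_KG_CO2 * (1 - used_fraction)
                + (NEW_BOOK_KG_CO2 / OWNERS_PER_USED_BOOK) * used_fraction)
    return KINDLE_KG_CO2 / (per_book * books_per_year)

print(payback_years(22.5, 0.00))  # ~1.0 year: the headline-style assumption
print(payback_years(8, 0.15))     # ~3.0 years: moderate reader, some used books
print(payback_years(3, 0.15))     # ~8.1 years: longer than many gadgets live
```

None of these numbers is authoritative; the point is that the conclusion lives or dies on exactly the assumptions the report does not show us.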

Plus, there’s more to life than carbon. What about the non-greenhouse impacts of manufacturing electronics, like mining rare earths, about 95% of which are produced in ever-friendly China, with horrific local destructiveness? Does Cleantech compare these non-carbon impacts? If not, why not? Do they count the greenhouse and other life-cycle impacts of the polyvinyl chloride carrying-cases that are usually purchased to protect e-readers like the Kindle? Again, if not, why not — unless to make e-readers look greener than they really are?

The broader moral is that no study’s merit can be judged without knowing how it got its answers. The Internet dwellers reported on the Cleantech report but asked no questions about methodology, and were apparently oblivious to the possibility that there might even be such questions. An attractively counter-intuitive number generated by unknown methods appears, is diffused, is quoted and re-quoted (Googling “Greentech Kindle study” gets 217,000 hits).

And behold, a factoid is born: e-books are greener than paper books. But are they?

Wednesday, February 10, 2010

OK, I’m Compelled Already


The New York Times just featured a clutch of invited essays on the question “Is Manned Spaceflight Obsolete?” First of all, the phrase is odd: obsolescence implies the appearance of a better alternative for something that was once adequate for some real purpose, and it’s not clear that “manned spaceflight” (why not the gender-neutral “human spaceflight”?) ever was a good or useful thing—though it has sometimes, without doubt, been an intensely exciting thing.

But verbal quibbles aside, let’s look at one of the Times’s pro-spaceflight essays, by Seth Shostak, senior astronomer at the SETI Institute—the folks who scan radio waves from space for signs of intelligence. (SETI, by the way, is a good project that has nothing to do with human spaceflight.) Shostak sets out to tell us “Why Hominids and Space Go Together.” He writes that “There are good – even compelling – reasons for a human presence in space.”

Boy—if these are the compelling reasons, the debate is over.

(1) “When it comes to looking for life on Mars, it’s been said that a human explorer could survey that world’s rusty landscape at least 10 times faster than a rover. If we want to know if life is a miracle or merely a cosmic commonplace, human exploration may be essential.”

Shostak’s putative tenfold acceleration would only happen once humans got to Mars, which would take about 10 times longer to accomplish than getting a team of very good robots there while costing 10–100 times more. Hmm. Also, Shostak equates getting the science faster with getting it period, a non sequitur. Even if sending humans were a faster way to get the science, that wouldn’t imply it was the only way. Shostak’s illogic reveals that the whole argument is a pretext. He doesn’t really want to send people to Mars to do faster science there, and might as well admit it.

(2) “Second, we are living on a world with limited real estate and finite resources. Both are expected to become critically stretched within a century. Frankly, homo sapiens will be a flash in the pan if we don’t get some members of our species off the planet.”

In other words, 99.99% of us are doomed to drown in our own excrement, so a lucky few should be sent forth in trillion-dollar lifeboats. This scenario is low-grade science fiction, not serious policy, especially given that if Earth society collapsed, independent long-term survival of Shostak’s projected colonies would be impossible. It takes an entire industrial economy to manufacture every single one of the thousands of touchy widgets essential to any space vehicle or dwelling. In space, no one can hear you scream for replacement parts. Even if we built Shostak’s space lifeboats, they would be roped firmly to our global Titanic.

(3) “Third, there’s the enormous appeal of space flight to young people. Ask any kid what interests them more: constructing autonomous rovers or going to Mars? The answer is obvious and so is the implication for NASA’s future.”

This is circular: We need to put people in space so that kids will grow up psyched to work for the organization that puts people in space. Not to mention the silliness of implying that NASA will continue to have an adequate hiring pool only if it pulls off an endless string of the most dramatic human spaceflight stunts imaginable. If such super-lures were really needed to suction bright engineers out of the educational system, our civilization would collapse: almost all real-life engineering work is unglamorous in the extreme compared to any space project, including a robotic one. (Ask the people who worked on Voyager if they found it boring.) Finally, the belief that we should shape national policy around what most excites 10-year-olds would lead to some very bizarre outcomes, if consistently applied. Again, Shostak doesn’t mean it.

(4) “And finally, there’s this: we need a frontier. Some part of each of us wants to ‘boldly go’, to explore and experience the unknown. The claim that stepping across the threshold of the unknown is too costly or too dangerous wouldn’t have impressed Magellan or Lindbergh.”

Even if it were true that we need a literal, physical frontier to be a healthy culture, which it isn’t, human space travelers never have and never will explore anything “unknown” because they aren’t going anywhere that hasn’t been thoroughly mapped by robots first. No astronautic boot is ever going to touch any lunar or planetary landscape that has not already been photographed at 1-meter resolution from orbit—that’s a mission-planning basic. The real explorers are therefore going to remain robots whether we send people in their wake or not. “Magellan”? Exactly — a robotic NASA probe, launched in 1989, whose radar revealed the surface of cloud-shrouded, kiln-hot Venus as no human space traveler ever could have or ever will, and did so for about half the net cost of a single Shuttle trip to nowhere.

[Image: the real modern Magellan (note human figure at lower left)]

[Image: a glimpse of what it found: shield volcanoes on Venus, imaged area about 300 miles across]

Shostak’s “most compelling” reasons compel us only to recognize that the case for human spaceflight consists entirely of sentiments derived from a rather crude and cruel species of science fiction. When Worlds Collide is a fun flick, and I worshipped it when I was 10, but you can’t build a life on lifeboat fantasies.

The advocates of space settlement say they want to save the human race, but actually they’re writing it off. What they most yearn for, I think, is simplification: no more chaotic, shit-besmeared billions, no more unpredictable Earth, but a cozy cadre of like-minded space fans thinking noble thoughts enwombed in an artificial environment as mysterious and unmanageable as the inside of a soup can. Even if it were possible, it would be Hell.

Sunday, January 24, 2010

All This Has Happened Before: The XMRV Imbroglio, Act II


The now-famous 2009 Science paper by Lombardi and colleagues [1] showing a strong correlation between the human retrovirus XMRV and chronic fatigue syndrome (CFS) did not parachute into the middle of nowhere. As Hillary Johnson’s Osler’s Web recounts, feelings have been running high on this subject since the 1980s. Careers have been devoted—especially, but not only, in the United Kingdom—to the idea that CFS is not a physical disorder but a psychological one. The Science paper was bound to be unpleasant reading for anyone who had treated scores of CFS patients on the psychological theory and put their professional credibility on the line to defend that theory.

Opponents of physical-cause theories of CFS were therefore cheered by the appearance in January 2010 of an article appearing to refute the Science piece. The new article, “Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome,” by Erlwein et al. [2], was hailed as showing that the Science piece was a false alarm: “Scientists’ claim to have found the cause of [CFS] is ‘premature’,” headlined The Independent, a British newspaper. [3] Nothing to see here, folks—it’s all over—move along, move along.

But as the Cylons repeat so annoyingly on Battlestar Galactica, “All this has happened before; all this will happen again.” Indeed it has, and indeed it will. Consider a recent case involving another highly charged subject, genetic engineering of crops.

Act I: Threatening Breakthrough

In 2001, David Quist and Ignacio Chapela of U.C. Berkeley announced in Nature that transgenes—chunks of DNA artificially transferred from one species to another—had been found in traditional maize landraces in central Mexico. [4] (A “landrace” is any locally adapted domestic plant breed.) The study seemed to show that modified genes could spread uncontrollably in the real world, as opponents of genetic engineering had always warned. If so, transgenes might threaten the character or continuance of the Mexican maize landraces, altering the Mexican diet and the global fate of corn itself. As Quist and Chapela put it,

Concerns have been raised about the potential effects of transgenic introductions on the genetic diversity of crop landraces and wild relatives in areas of crop origin and diversification, as this diversity is considered essential for global food security. Direct effects on non-target species and the possibility of unintentionally transferring traits of ecological relevance onto landraces and wild relatives have also been sources of concern.

Act II: Triumphant Counter-Study

Genetically modified corn is big business: 25% of the world’s corn, including 80% of US corn, is genetically modified. [5] It is therefore not surprising that the 2001 paper was intensely attacked for minor methodological flaws. Under pressure, Nature took the unprecedented step of publishing a quasi-retraction (not approved by the authors, Quist and Chapela) stating that “the evidence available is not sufficient to justify the publication of the original paper.” [6]

The debunking process seemed complete in 2005, when a paper in the Proceedings of the National Academy of Sciences reported finding not a single transgene in thousands of maize samples from the same parts of Mexico examined by Quist and Chapela: zero, repeat, zero transgenes in 153,746 sampled seeds. [7]

In certain quarters, the relief was palpable. The zero result was taken as grounds not for questioning the second study but for scolding the first. In a 2006 review [8] of the preceding decade’s nine worst “biotech gaffes,” the editor of Nature Biotechnology spanked Quist and Chapela for being partly “culpable for lukewarm public acceptance and stigmatization of transgenic crop technology.” That’s right, culpable as in deserving of blame. “With so many of the groups ideologically opposed to transgenic crops able to exploit the media, scare the public and perpetuate myths and conspiracy theories about genetic engineering over the Internet,” the editor harrumphed, “prestigious journals [e.g., Nature] should be aware of the long-lasting damage resulting from their willingness to widely publicize results that may be contentious or equivocal.” The maize-transgene scare had “certainly contributed to the decline of European agbiotech.” Very, very naughty.

In the backlash against the original article, the academic career of one of its authors, Chapela, was almost destroyed by denial of tenure, but he won in court. [9]

Act III: Vindication

In late 2008, a paper in Molecular Ecology [10] vindicated Quist and Chapela on the basis of tens of thousands of Mexican maize samples. Once again, transgenes had been found in Mexican corn landraces. What is more, the authors of the new study explained exactly why different studies had been getting different answers and accounted for the false-negative results of the 2005 Proceedings of the National Academy of Sciences paper in detail. Even the lead author of that 2005 paper, invited to comment in the same issue of the journal, sportingly pronounced the new piece “a very good study.”

It does not appear, however, that the editor of Nature Biotechnology ever retracted his attacks on Quist and Chapela’s integrity. Indeed, I know of no apology or retraction from anyone who had accused Quist and Chapela of bad science.

Observations

The parallels of the Mexican transgene saga to the XMRV saga are striking: In Act I, upsetting primary science appears and solemn warnings against taking a single study too seriously are issued. In Act II, a single study offering to overturn the original is hailed with relief and trumpets. In Act III, the primary science is vindicated—at least, it was with the maize transgenes. There has been no vindicating third act yet for the XMRV/CFS theory, but there are at least three reasons to bet on one.

First, getting a zero result where multiple, independent previous studies have got a nonzero result should be a red flag with any new XMRV study, just as it should have been during the Great Mexican Maize Mystery. Zero? Are you sure you’ve taken the lens cap off the camera?

Second, both zero-result studies—the maize study and the UK XMRV study—used tests for the presence of the target gene or virus different from those used in the studies they challenged. In the case of the maize transgenes, this turned out to be the crux of the problem. Whittemore Peterson states that “the recent study published in the U.K. . . . used non-validated PCR and whole blood PCR assays.” [11] Moreover, the UK study did not select candidates for study using the Science study’s standard, thus producing a twofold apples-and-oranges problem.

Third, the UK XMRV study was rushed through peer review in a few days, according to the Whittemore Peterson Institute. [12] [PS: Actually, as a friendly commenter points out, this is according to the online journal where the piece was published—here’s a screen shot from the source:]

[Image: screen shot from the journal’s website]

Science is done by human beings, not gods or robots. Its glory is that its method of community-scale, independent checking and criticism almost always enables the production of increasingly accurate knowledge, over time, by a group of people—scientists—who are not increasingly wise or perfect. But in the short term, especially where powerful economic and other interests are involved, the data that some people want to see have a tendency to appear—or the data they do not want to see may be long delayed by diversion of funding and other tactics. Reality wins in the end, but the end may be a while in coming.

Extraordinary claims do require extraordinary evidence, and it is easy to be jerked around by a single paper here and a single paper there. But in the case of the XMRV imbroglio, my money is on the Whittemore Peterson people, who have clearly done the more thorough work. In any case, more studies are under way, and if they are of adequate quality, they will settle this dispute—scientifically.

Unfortunately, that will not necessarily settle the dispute altogether. To answer any scientific question in a way that some people strongly dislike seems to automatically give birth to a new form of denialism. Evolution and global warming already have millions of entrenched, unconvincable unbelievers: is XMRV next?

--------------------

REFERENCES

Note: I will e-mail copies of the subscription-required articles to anyone who asks me for them: Lgilman [ a t ] myfairpoint [ d o t ] net

1. http://www.cfids-cab.org/MESA/Lombardi.pdf

2. http://www.plosone.org/doi/pone.0008519

3. http://www.independent.co.uk/news/science/scientists-claim-to-have-found-the-cause-of-me-is-premature-1859003.html

4. Quist, David, and Ignacio H. Chapela, “Transgenic DNA introgressed into traditional maize landraces in Oaxaca, Mexico.” Nature 414 (Nov. 29, 2001), 541–543.

5. http://www.gmo-compass.org/eng/agri_biotechnology/gmo_planting/341.genetically_modified_maize_global_area_under_cultivation.html

6. http://www.nature.com/nature/journal/v416/n6881/full/nature738.html

7. Ortiz-Garcia, S., et al., “Absence of detectable transgenes in local landraces of maize in Oaxaca, Mexico (2003–2004).” Proceedings of the National Academy of Sciences 102(35), August 30, 2005, 12338–12343. Available at http://www.pnas.org/content/102/35/12338.full.pdf.

8. http://www.nature.com/nbt/journal/v24/n3/full/nbt0306-270.html

9. http://www.counterpunch.org/tonak06262004.html

10. Piñeyro-Nelson, A., et al., “Transgenes in Mexican maize: molecular evidence and methodological considerations for GMO detection in landrace populations.” Molecular Ecology 18 (2009), 750–761.

11. http://www.wpinstitute.org/news/docs/WPI_pressrel_011410.pdf

12. http://www.wpinstitute.org/news/docs/WPI_Erlwein_010610.pdf

Friday, October 23, 2009

Night of the Living Benthamites

A scientist friend has drawn my attention to an article on the moral uncertainties posed by climate change. In the article, “Anthropogenic climate change: Scientific uncertainties and moral dilemmas” (Physica D 237 (2008): 2132–2138), Rafaela Hillerbrand and Michael Ghil say they know “a way to correctly incorporate all the relevant uncertainties into the decision making process” (2133). Their suggested method is Expected Utility Theory, an offshoot of game theory, decision theory, and economics.

Hillerbrand and Ghil are “utilitarians,” meaning that for them “the morally correct action is one that maximizes overall human welfare” (2134). Expected Utility Theory is the formal, logical expression of that philosophy. Here is Hillerbrand and Ghil’s quasi-mathematical definition of it (brace yerself!):

[Image: Hillerbrand and Ghil’s formal definition of Expected Utility Theory]
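(In standard textbook form, the criterion they are invoking looks something like the following; this is a generic reconstruction, not a transcription of their equation:)

```latex
% Generic expected-utility criterion (a reconstruction; Hillerbrand and
% Ghil's exact notation may differ). Each possible outcome i of an
% action a gets a probability p_i(a) and a real-valued utility U_i:
\[
  \mathrm{EU}(a) \;=\; \sum_{i} p_i(a)\, U_i ,
  \qquad U_i \in \mathbb{R}
\]
% The "morally correct" action is then the one that maximizes EU(a).
```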


I call this quasi-mathematical because, for all its Greek symbols and function talk, the idea that people have “preferences” that can be assigned specific numbers is pure whimsy. One-dimensional preferences, as a few seconds of introspection will confirm, don’t even exist. Your desire for a cheeseburger cannot, even approximately, be assigned a unique Ui in the domain of real numbers R: it varies with time of day, how much you have been conditioned by marketing, your desire for healthier food, your desire to indulge in unhealthy food, what you remember of Fast Food Nation or The Omnivore’s Dilemma, a hundred other aspects of self. It is not a quantity at all. So nobody actually computes numbers to plug into equations like this: even economists and utilitarian philosophers aren’t that crazy. All this jargon and notation are gestural, not operable. They are there to lend an air of scientific authority to a philosophical creed first advanced a couple of centuries ago by Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873). The notational fancy dress adds nothing to the basic belief, which many have challenged and which strikes me as essentially crude. “The needs of the many outweigh the needs of the one,” as Spock intones annoyingly in The Wrath of Khan (1982).

But one needn’t get into the shortcomings of Utilitarianism to find problems with Hillerbrand and Ghil’s approach. A big one is their assumption that if we spend money to mitigate climate change, we will be making people worse off in the here-and-now for the sake of people yet unborn. “[M]itigation of major changes in future climate,” they say, is a “predominantly altruistic goal”; “Investing in the mitigation of climate-change effects means foregoing other investments” in human welfare (2134). This assumption is basic to Hillerbrand and Ghil’s whole argument, because if fighting climate change does not require us to rob a present Peter to pay a future Paul, then there is no trade-off, no dilemma, and nothing to write papers about. In particular, Expected Utility Theory (EUT) is irrelevant even if valid.

But the assumption is questionable, even by orthodox economic measures. The Intergovernmental Panel on Climate Change (IPCC) found in 2007 that by 2030, “macro-economic costs for multi-gas mitigation, consistent with emissions trajectories towards stabilization between 445 and 710 ppm CO2-eq, are estimated at between a 3% decrease of global GDP [Gross Domestic Product] and a small increase” (2007 IPCC report on Mitigation of Climate Change, Summary for Policymakers). “Small increase” means that the world economy might well be strengthened by mitigation—a negative cost—as market distortions are rectified, waste is cut, smart technologies are deployed, and positive side-effects kick in. For example, “near-term health co-benefits from reduced air pollution as a result of actions to reduce [greenhouse] emissions can be substantial and may offset a substantial fraction of mitigation costs” (ibid.)

To support their claim that climate-change mitigation is inherently dilemmic, Hillerbrand and Ghil cite figures (in a section titled, ironically, “Jumping to Conclusions”) showing that it might cost $450 billion per year, about 1% of global GDP, to mitigate climate change sufficiently to avoid “major hazards,” whereas providing 80% of rural Africa with water and sanitation by 2015 would cost “only US$1.3 billion per annum” (2134). This is their only example of a mitigation “trade-off.”

But as their own use of the word “only” flags, this argument is weird. $1.3 billion is 0.3%—one third of one percent!—of $450 billion. The US spends $1.3 billion on the Iraq War every two days. [1] If the world were able and willing to pony up $450 billion for what Hillerbrand and Ghil consider the “predominantly altruistic” goal of climate-change mitigation, wouldn’t it probably be able and willing to come up with a few altruistic pence for African health as well? Wouldn’t it have made more rhetorical sense for Hillerbrand and Ghil to pit an expensive here-and-now opportunity against the possible costs of climate mitigation?

But that is the least of this example’s problems. A bigger one is that Africans happen to be more endangered by climate change than almost anybody else—especially in the water department. To quote the IPCC’s 2007 report again,

Africa is one of the most vulnerable continents to climate change and climate variability . . . Climate change will aggravate the water stress currently faced by some countries, while some countries that currently do not experience water stress will become at risk of water stress . . . (Ch. 9 of the Impacts, Adaptation, and Vulnerability report)

And water would only be part of it: “Agricultural production and food security (including access to food) in many African countries and regions are likely to be severely compromised by climate change and climate variability . . .” (ibid.)

So let’s get this straight: Failure to mitigate climate change will screw Africa, big-time. It will undermine, perhaps entirely destroy, any benefits purchased by the $1.3 billion per year that Hillerbrand and Ghil cast fallaciously as an alternative investment. Climate-change mitigation is a precondition for meaningful, long-term aid to Africa, not an alternative to such aid. The whole idea that mitigation would rob the Africans is thus cockamamie, backwards-ass. In fact, one of the nastier tensions in the mitigation debate these days is that the up-front costs of mitigation (usually assumed to be positive) must be borne by strong economies like those of the US, Europe, Japan, and China, while the benefits are going to be greatest for Africa and the other poorer, “southern” regions of the world that are threatened most by climate change to begin with. Hillerbrand and Ghil fantasize that mitigation would steal well-being from the Africans: in reality, the G-20 are quietly worried that mitigation is a giveaway to the Africans.

Another fundamentally wrong, wrong, wrong thing about the either/or notion of climate mitigation that justifies Hillerbrand and Ghil’s whole exercise is the idea that the harms which climate-change mitigation seeks to prevent are distant harms, that mitigation is about “the well-being of future generations” (2133). That is only partly true: yes, what we do today will affect human life for many centuries to come, but the years 2050 (which many of us will live to see) and 2100 (which many of our grandchildren will live to see) are the most commonly referenced points in climate and mitigation scenarios. And let’s not forget Africa, of which the IPCC says: “Changes in a variety of ecosystems are already being detected, particularly in southern African ecosystems, at a faster rate than anticipated . . . ” (ibid., emphasis added). So this is about the future starting now, not the future starting after we are all long dead.

Fortunately, the things we need to do to prevent climate change from becoming disastrous are much the same things we need to do to save ourselves from several other catastrophes: ocean acidification, resource exhaustion, soil death, more. There is therefore no moral puzzle at all about mitigating climate change. Urgent altruism aligns with long-term altruism; self-interest aligns with intergenerational interest; climate mitigation aligns with numerous other benefits; long-term sanity may even, for a miracle, align with economics.

But let’s grant for the sake of the argument Hillerbrand and Ghil’s assumption that mitigation is dilemmic, and see how they deal.

I note, first, that on their first page they subtly minimize the climate threat, stating that climate change will last “for decades to come.” Decades? Try centuries, based on the long residence time of CO2 in the atmosphere. The Intergovernmental Panel on Climate Change stated in 2007 that “anthropogenic climate change will persist for many centuries.”

On the second page (p. 2133), they set up a classic straw man: “If there is a moral obligation to preserve the climate in its present state, where does it stem from?”

But everyone who knows the science understands that we cannot “preserve the climate in its present state.” That ship has already sailed. The climate has already changed. It is changing all the time and will continue to change. If we were to zero out our greenhouse emissions tomorrow, simply stop emitting altogether, anthropogenic climate change would continue for centuries, albeit at a greatly reduced rate and to a less extreme endpoint. The question is not whether “there is a moral obligation to preserve the climate in its present state”: it is whether there is a moral obligation to do our best to prevent climate from changing uncontrollably and irrevocably to a much different state.

Hillerbrand and Ghil end their article with the rather nonspecific and uncontroversial suggestion that we give scientific prognoses a major role in decisions about climate mitigation while acknowledging that such prognoses, “taken on their own, give no sufficient reasons for acting or not acting, this way or the other” (2137). Well, no duh!

It is hard to see how we could apply Expected Utility Theory (EUT) to questions of mitigation even if we wanted to:

First, because it depends on a bloodless pseudo-math that will never move millions or billions of people to change their lives. Perhaps Hillerbrand and Ghil assume, autocratically, that all important change will be imposed from the top down by an expertocracy: they seem to assume this, but never say so explicitly.

Second, EUT won’t work on climate for the reason Hillerbrand and Ghil acknowledge on p. 2135: “assigning actual likelihood values to expected impacts on human welfare [in EUT] is often difficult or even impossible with the current state of knowledge” (or, I would add, any plausible future state of knowledge). In other words, they would have us adopt a quantitative method without having access to the required quantities. This is better known as “guessing.”

Third, because of EUT’s admitted anthropocentrism (“valuing the environment solely as a basic resource for humanity, as done in the present paper”—p. 2135). Hillerbrand and Ghil exclude any idea that the nonhuman (or at least non-“sentient”) world is sacred and that we have any obligation to steward it. If humans can be fine without giant redwoods, then the redwoods can go. In Hillerbrand and Ghil’s vocabulary, only “environmentalists” will bemoan this outcome very much. Jesus, have these people never gone camping? The idea of valuing the non-human is deep-rooted in hundreds of millions of people, if not billions. How can it “promote overall human welfare,” Hillerbrand and Ghil’s ultimate value, to discard a value cherished deeply by a large portion of humans?

No: it won’t do. There is some hope in moral common sense, philosophically raggedy-ass though it may be: there is no hope in the robotic gameboarding of Expected Utility Theory. Common fear of total disaster, common care for the fate of our descendants, common devotion to the non-human as well as to the human—love, in fact, love for land, for life, for people, for non-people, for children: this is the only hope, unless one assumes (as I think Hillerbrand and Ghil do) that paltering, marginal actions are really all that’s going to be necessary . . .

I see it now. Full circle. The issue-minimizing and straw-manning that Hillerbrand and Ghil deploy early in their piece lower the bar enough so that their pet ethical calculus, Expected Utility Theory, can hop over it. If the question were only whether we are obliged to make a feint at attempting the impossible (i.e., “preserve climate in its present state”), and that over a scale of mere “decades,” then a morally contorted pseudo-quantitative system operating on admittedly insufficient data would suffice. Because accomplishing next to nothing would really be OK. If you’re selling plastic sporks, you want people to think they’re going to be eating pudding, not steak.

Utilitarianism is a spork.

----------------------------------

[1] As of 2007, the American Friends Service Committee calculated that the US was spending $720 million per day on the Iraq War (reported in the Washington Post). That’s $1.3 billion every 1.8 days.

----------------------------------

I will e-mail a PDF of the Hillerbrand and Ghil article to anyone who writes me at lnpgilman [ a t ] wildblue [ d o t ] net to request a copy.