Bully for Brontosaurus





BY THE SAME AUTHOR


Ontogeny and Phylogeny

Ever Since Darwin

The Panda’s Thumb

The Mismeasure of Man

Hen’s Teeth and Horse’s Toes

The Flamingo’s Smile

An Urchin in the Storm

Time’s Arrow, Time’s Cycle

Illuminations (with R. W. Purcell)

Wonderful Life

Finders, Keepers: Eight Collectors (with R. W. Purcell)





Bully for Brontosaurus


Reflections in Natural History





Stephen Jay Gould


W. W. NORTON & COMPANY

NEW YORK LONDON





Cover design by Mike McIver



Cover painting by C.R. Knight, Brontosaurus

Courtesy Department of Library Services, American Museum of Natural History. Neg. trans. no. 2417 (3)

Copyright © 1991 by Stephen Jay Gould

All rights reserved.



First published as a Norton paperback 1992



Library of Congress Cataloging-in-Publication Data

Gould, Stephen Jay.

Bully for brontosaurus : reflections in natural history / Stephen Jay Gould.

p. cm.

1. Natural history—Popular works. 2. Evolution—Popular works. I. Title.

QH45.5.G68 1991

508—dc20 91-6916

ISBN: 978-0-393-30857-0



W. W. Norton & Company, Inc.

500 Fifth Avenue, New York, N.Y. 10110

www.wwnorton.com



W. W. Norton & Company Ltd.

Castle House, 75/76 Wells Street, London W1T 3QT






Pleni sunt coeli

et terra

gloria eius.





Hosanna in excelsis.





Contents


Prologue





1 | HISTORY IN EVOLUTION



1 George Canning’s Left Buttock and the Origin of Species



2 Grimm’s Greatest Tale



3 The Creation Myths of Cooperstown



4 The Panda’s Thumb of Technology





2 | DINOMANIA



5 Bully for Brontosaurus



6 The Dinosaur Rip-off





3 | ADAPTATION



7 Of Kiwi Eggs and the Liberty Bell



8 Male Nipples and Clitoral Ripples



9 Not Necessarily a Wing





4 | FADS AND FALLACIES



10 The Case of the Creeping Fox Terrier Clone



11 Life’s Little Joke



12 The Chain of Reason versus the Chain of Thumbs





5 | ART AND SCIENCE



13 Madame Jeanette



14 Red Wings in the Sunset



15 Petrus Camper’s Angle



16 Literary Bias on the Slippery Slope





6 | DOWN UNDER



17 Glow, Big Glowworm



18 To Be a Platypus



19 Bligh’s Bounty



20 Here Goes Nothing





7 | INTELLECTUAL BIOGRAPHY



Biologists

21 In a Jumbled Drawer



22 Kropotkin Was No Crackpot



23 Fleeming Jenkin Revisited





Physical Scientists

24 The Passion of Antoine Lavoisier



25 The Godfather of Disaster





8 | EVOLUTION AND CREATION



The World of T. H. Huxley

26 Knight Takes Bishop?



27 Genesis and Geology





Scopes to Scalia

28 William Jennings Bryan’s Last Campaign



29 An Essay on a Pig Roast



30 Justice Scalia’s Misunderstanding





9 | NUMBERS AND PROBABILITY



31 The Streak of Streaks



32 The Median Isn’t the Message



33 The Ant and the Plant





10 | PLANETS AS PERSONS



34 The Face of Miranda



35 The Horn of Triton





Bibliography





Prologue




IN FRANCE, they call this genre vulgarisation—but the implications are entirely positive. In America, we call it “popular (or pop) writing” and its practitioners are dubbed “science writers” even if, like me, they are working scientists who love to share the power and beauty of their field with people in other professions.

In France (and throughout Europe), vulgarisation ranks within the highest traditions of humanism, and also enjoys an ancient pedigree—from St. Francis communing with animals to Galileo choosing to write his two great works in Italian, as dialogues between professor and students, and not in the formal Latin of churches and universities. In America, for reasons that I do not understand (and that are truly perverse), such writing for nonscientists lies immured in deprecations—“adulteration,” “simplification,” “distortion for effect,” “grandstanding,” “whiz-bang.” I do not deny that many American works deserve these designations—but poor and self-serving items, even in vast majority, do not invalidate a genre. “Romance” fiction has not banished love as a subject for great novelists.

I deeply deplore the equation of popular writing with pap and distortion for two main reasons. First, such a designation imposes a crushing professional burden on scientists (particularly young scientists without tenure) who might like to try their hand at this expansive style. Second, it denigrates the intelligence of millions of Americans eager for intellectual stimulation without patronization. If we writers assume a crushing mean of mediocrity and incomprehension, then not only do we have contempt for our neighbors, but we also extinguish the light of excellence. The “perceptive and intelligent” layperson is no myth. They exist in millions—a low percentage of Americans perhaps, but a high absolute number with influence beyond their proportion in the population. I know this in the most direct possible way—by thousands of letters received from nonprofessionals during my twenty years of writing these essays, and particularly from the large number written by people in their eighties and nineties, and still striving, as intensely as ever, to grasp nature’s richness and add to a lifetime of understanding.

We must all pledge ourselves to recovering accessible science as an honorable intellectual tradition. The rules are simple: no compromises with conceptual richness; no bypassing of ambiguity or ignorance; removal of jargon, of course, but no dumbing down of ideas (any conceptual complexity can be conveyed in ordinary English). Several of us are pursuing this style of writing in America today. And we enjoy success if we do it well. Thus, our primary task lies in public relations: We must be vigorous in identifying what we are and are not, uncompromising in our claims to the humanistic lineages of St. Francis and Galileo, not to the sound bites and photo ops in current ideologies of persuasion—the ultimate in another grand old American tradition (the dark side of anti-intellectualism, and not without a whiff of appeal to the unthinking emotionalism that can be a harbinger of fascism).

Humanistic natural history comes in two basic lineages. I call them Franciscan and Galilean in the light of my earlier discussion. Franciscan writing is nature poetry—an exaltation of organic beauty by corresponding choice of words and phrase. Its lineage runs from St. Francis to Thoreau on Walden Pond, W. H. Hudson on the English downs, to Loren Eiseley in our generation. Galilean composition delights in nature’s intellectual puzzles and our quest for explanation and understanding. Galileans do not deny the visceral beauty, but take greater delight in the joy of causal comprehension and its powerful theme of unification. The Galilean (or rationalist) lineage has roots more ancient than its eponym—from Aristotle dissecting squid to Galileo reversing the heavens, to T. H. Huxley inverting our natural place, to P. B. Medawar dissecting the follies of our generation.

I love good Franciscan writing but regard myself as a fervent, unrepentant, pure Galilean—and for two major reasons. First, I would be an embarrassing flop in the Franciscan trade. Poetic writing is the most dangerous of all genres because failures are so conspicuous, usually as the most ludicrous form of purple prose (see James Joyce’s parody, cited in Chapter 17). Cobblers should stick to their lasts and rationalists to their measured style. Second, Wordsworth was right. The child is father to the man. My youthful “splendor in the grass” was the bustle and buildings of New York. My adult joys have been walks in cities, amidst stunning human diversity of behavior and architecture—from the Quirinal to the Piazza Navona at dusk, from the Georgian New Town to the medieval Old Town of Edinburgh at dawn—more than excursions in the woods. I am not insensible to natural beauty, but my emotional joys center on the improbable yet sometimes wondrous works of that tiny and accidental evolutionary twig called Homo sapiens. And I find, among these works, nothing more noble than the history of our struggle to understand nature—a majestic entity of such vast spatial and temporal scope that she cannot care much for a little mammalian afterthought with a curious evolutionary invention, even if that invention has, for the first time in some four billion years of life on earth, produced recursion as a creature reflects back upon its own production and evolution. Thus, I love nature primarily for the puzzles and intellectual delights that she offers to the first organ capable of such curious contemplation.

Franciscans may seek a poetic oneness with nature, but we Galilean rationalists have a program of unification as well—nature made mind and mind now returns the favor by trying to comprehend the source of production.

This is the fifth volume of collected essays from my monthly series, “This View of Life,” now approaching two hundred items over eighteen years in Natural History magazine (the others, in order, are Ever Since Darwin, The Panda’s Thumb, Hen’s Teeth and Horse’s Toes, and The Flamingo’s Smile). The themes may be familiar (with a good dollop of novelty, I trust), but the items are mostly new (and God has never left his dwelling place in the details).

Against a potential charge of redundancy, may I advance the immodest assertion that this volume is the best of the five. I think that I have become a better writer by monthly practice (I sometimes wish that all copies of Ever Since Darwin would self-destruct), and I have given myself more latitude of selection and choice in this volume. (The previous four volumes discarded only a turkey or two and then published all available items in three years of essays. This volume, covering six years of writing, presents the best, or rather the most integrated, thirty-five pieces from more than sixty choices.)

These essays, while centered on the enduring themes of evolution and the innumerable, instructive oddities of nature (frogs that use their stomachs as brood pouches, the gigantic eggs of Kiwis, an ant with a single chromosome), also record the specific passage of six years since the fourth volume. I have marked the successful completion of a sixty-year battle against creationism (since the Scopes trial of 1925) in our resounding Supreme Court victory of 1987 (see essays under “Scopes to Scalia”), the bicentennial of the French revolution (in an essay on Lavoisier, most prominent scientific victim of the Reign of Terror), and the magnificent completion of our greatest technical triumph in Voyager’s fly-by and photography of Uranus and Neptune (Essays 34 and 35). I also record, as I must, our current distresses and failures—the sorry state of science education (approached, as is my wont, not tendentiously, abstractly, and head-on, but through byways that sneak up on generality—fox terriers and textbook copying, or subversion of dinomania for intellectual benefit), and a sad epilogue on the extinction, between first writing and this republication, of the stomach-brooding frog.

Yet I confess that my personal favorites usually treat less immediate, even obscure, subjects—especially when correction of the errors that confined them to ridicule or obscurity retells their stories as relevant and instructive today. Thus, I write about Abbott Thayer’s theory that flamingos are red to hide them from predators in the sunset, Petrus Camper’s real intent (criteria for art) in establishing a measure later used by scientific racists, the admirable side of William Jennings Bryan and the racist nonsense in the text that John Scopes used to teach evolution, the actual (and much more interesting) story behind the heroic, cardboard version of the Huxley-Wilberforce debate of 1860.

For what it’s worth, my own favorite is Essay 21 on N. S. Shaler and William James (I won’t reveal my vote for the worst essays—especially since they have been shredded in my mental refuse bin and will not be included in these volumes). At least Essay 21 best illustrates my favorite method of beginning with something small and curious and then working outward and onward by a network of lateral connections. I found the fearful letter of Shaler to Agassiz in a drawer almost twenty years ago. I always knew that I would find a use for it someday—but I had no inkling of the proper context. A new biography of Shaler led me to explore his relationship with Agassiz. I then discovered the extent of Shaler’s uncritical (and lifelong) fealty by reading his technical papers. At this point, luck intervened. One of my undergraduate advisees told me that William James, as a Harvard undergraduate, had sailed with Agassiz to Brazil on the master’s penultimate voyage. I knew that Shaler and James had been friendly colleagues and intellectual adversaries—and now I had full connectivity in their shared link to Agassiz. But would anything interesting emerge from all these ties? Again, good fortune smiled. James had been critical of Agassiz right from the start—and in the very intellectual arena (contingency versus design in the history of life) that would host their later disagreements as distinguished senior professors. I then found a truly amazing letter from James to Shaler offering the most concise and insightful rebuttal I have ever read to the common misconception—as current today as when James and Shaler argued—that the improbability of our evolution indicates divine intent in our origin. James’s document—also a brilliant statement on the general nature of probability—provided a climax of modern relevance for a story that began with an obscure note lying undiscovered in a drawer for more than a hundred years. Moreover, James’s argument allowed me to resolve the dilemma of the museum janitor, Mr. Eli Grant, potential victim of Shaler’s cowardly note—so the essay ends by using James’s great generality to solve the little mystery of its beginning, a more satisfactory closure (I think) than the disembodied abstraction of James’s brilliance.

Finally, and now thrice lucky, I received two years later a fascinating letter from Jimmy Carter presenting a theological alternative to the view of contingency and improbability in human evolution advanced in my last book, Wonderful Life. Carter’s argument, though more subtle and cogent than Shaler’s, follows the same logic—and James’s rebuttal has never been bettered or more apropos. And so, by presidential proclamation, I had an epilogue that proved the modern relevance of Shaler’s traditionalism versus James’s probing.

Some people have seen me as a polymath, but I insist that I am a tradesman. I admit to a broad range of explicit detail, but all are chosen to illustrate the common subjects of evolutionary change and the nature of history. And I trust that this restricted focus grants coherence and integration to an overtly disparate range of topics. The bullet that hit George Canning in the ass really is a vehicle for discussing the same historical contingency that rules evolution. My sweet little story about nostalgia at the thirtieth reunion of my All-City high school chorus is meant to be a general statement (bittersweet in its failure to resolve a cardinal dichotomy) about the nature of excellence. The essay on Joe DiMaggio’s hitting streak is a disquisition on probability and pattern in historical sequences; another on the beginnings of baseball explores creation versus evolution as primal stories for the origin of any object or institution. And Essay 32, the only bit I have ever been moved to write about my bout with cancer, is not a confessional in the personal mode, but a general statistical argument about the nature of variation in populations—the central topic of all evolutionary biology.

A final thought on Franciscans and Galileans in the light of our environmental concerns as a tattered planet approaches the millennium (by human reckoning—as nature, dealing in billions, can only chuckle). Franciscans engage the glory of nature by direct communion. Yet nature is so massively indifferent to us and our suffering. Perhaps this indifference, this majesty of years in uncaring billions (before we made a belated appearance), marks her true glory. Omar Khayyám’s old quatrain grasped this fundamental truth (though he should have described his Eastern hotel, his metaphor for the earth, as grand rather than battered):





Think, in this battered caravanserai

Whose portals are alternate night and day,

How sultan after sultan with his pomp

Abode his destined hour, and went his way.



The true beauty of nature is her amplitude; she exists neither for nor because of us, and possesses a staying power that all our nuclear arsenals cannot threaten (much as we can easily destroy our puny selves).

The hubris that got us into trouble in the first place, and that environmentalists seek to avoid as the very definition of their (I should say our) movement, often creeps back in an unsuspected (and therefore potentially dangerous) form in two tenets frequently advanced by “green” movements: (1) that we live on a fragile planet subject to permanent ruin by human malfeasance; (2) that humans must act as stewards of this fragility in order to save our planet.

We should be so powerful! (Read this sentence with my New York accent as a derisive statement about our false sense of might, not as a literal statement of desire.) For all our mental and technological wizardry, I doubt that we can do much to derail the earth’s history in any permanent sense by the proper planetary time scale of millions of years. Nothing within our power can come close to conditions and catastrophes that the earth has often passed through and beyond. The worst scenario of global warming under greenhouse models yields an earth substantially cooler than many happy and prosperous times of a prehuman past. The megatonnage of the extraterrestrial impact that probably triggered the late Cretaceous mass extinction has been estimated at 10,000 times greater than all the nuclear bombs now stockpiled on earth. And this extinction, wiping out some 50 percent of marine species, was paltry compared to the granddaddy of all—the Permian event some 225 million years ago that might have dispatched up to 95 percent of species. Yet the earth recovered from these superhuman shocks, and produced some interesting evolutionary novelties as a result (consider the potential for mammalian domination, including human emergence, following the removal of dinosaurs).

But recovery and restabilization occur at planetary, not human, time scales—that is, millions of years after the disturbing event. At this scale, we are powerless to harm; the planet will take care of itself, our puny foolishnesses notwithstanding. But this time scale, though natural for planetary history, is not appropriate in our legitimately parochial concern for our own species, and the current planetary configurations that now support us. For these planetary instants—our millennia—we do hold power to impose immense suffering (I suspect that the Permian catastrophe was decidedly unpleasant for the nineteen of twenty species that didn’t survive).

We certainly cannot wipe out bacteria (they have been the modal organisms on earth right from the start, and probably shall be until the sun explodes); I doubt that we can wreak much permanent havoc upon insects as a whole (whatever our power to destroy local populations and species). But we can surely eliminate our fragile selves—and our well-buffered earth might then breathe a metaphorical sigh of relief at the ultimate failure of an interesting but dangerous experiment in consciousness. Global warming is worrisome because it will flood our cities (built so often at sea level as ports and harbors), and alter our agricultural patterns to the severe detriment of millions. Nuclear war is an ultimate calamity for the pain and death of billions, and the genetic maiming of millions in future generations.

Our planet is not fragile at its own time scale, and we, pitiful latecomers in the last microsecond of our planetary year, are stewards of nothing in the long run. Yet no political movement is more vital and timely than modern environmentalism—because we must save ourselves (and our neighbor species) from our own immediate folly. We hear so much talk about an environmental ethic. Many proposals embody the abstract majesty of a Kantian categorical imperative. Yet I think that we need something far more grubby and practical. We need a version of the most useful and ancient moral principle of all—the precept developed in one form or another by nearly every culture because it acts, in its legitimate appeal to self-interest, as a doctrine of stability based upon mutual respect. No one has ever improved upon the golden rule. If we execute such a compact with our planet, pledging to cherish the earth as we would wish to be treated ourselves, she may relent and allow us to muddle through. Such a limited goal may strike some readers as cynical or blinkered. But remember that, to an evolutionary biologist, persistence is the ultimate reward. And human brainpower, for reasons quite unrelated to its evolutionary origin, has the damnedest capacity to discover the most fascinating things, and think the most peculiar thoughts. So why not keep this interesting experiment around, at least for another planetary second or two?





1 | History in Evolution





1 | George Canning’s Left Buttock and the Origin of Species




I KNOW the connection between Charles Darwin and Abraham Lincoln. They conveniently contrived to enter the world on the same day, February 12, 1809, thus providing forgetful humanity with a mnemonic for ordering history. (Thanks also to John Adams and Thomas Jefferson for dying on the same momentous day, July 4, 1826, exactly fifty years after our nation’s official birthdate.)

But what is the connection between Charles Darwin and Andrew Jackson? What can an English gentleman who mastered the abstractions of science hold in common with Old Hickory, who inaugurated the legend (later exploited by Lincoln) of the backwoodsman with little formal education fighting his way to the White House? (Jackson was born on the western frontier of the Carolinas in 1767, but later set up shop in the pioneer territory of Nashville.) This more difficult question requires a long string of connections more worthy of Rube Goldberg than of logical necessity. But let’s have a try, in nine easy steps.

1. Andy Jackson, as a result of his military exploits in and around the ill-fated War of 1812, became a national figure, and ultimately, on this basis, a presidential contender. In a conflict conspicuously lacking in good news, Jackson provided much solace by winning the Battle of New Orleans, our only major victory on land after so many defeats and stalemates. With help from the privateer Jean Lafitte (who was then pardoned by President Madison but soon resumed his old ways), Jackson decisively defeated the British forces on January 8, 1815, and compelled their withdrawal from Louisiana. Cynics often point out, perhaps ungenerously, that Jackson’s victory occurred more than two weeks after the war had officially ended, but no one had heard the news down in the bayous because the treaty had been signed in Ghent and word then traveled no faster than ship.

2. When we were about to withdraw from Vietnam and acknowledge (at least privately) that the United States had lost the war, some supporters of that venture (I was not among them) drew comfort from recalling that, patriotic cant aside, this was not our first military defeat. Polite traditions depict the War of 1812 as a draw, but let’s face it, basically we lost—at least in terms of the larger goal espoused by hawks of that era: the annexation of Canada, at least in part. But we did manage to conserve both territory and face, an important boon to America’s future and a crucial ingredient in Jackson’s growing reputation. Washington, so humiliated just a few months before when British troops burned the White House and the Capitol, rejoiced in two items of news, received in early 1815 in reverse order of their actual occurrence: Jackson’s victory at New Orleans, and the favorable terms of the Treaty of Ghent, signed on December 24, 1814.

3. The Treaty of Ghent restored all national boundaries to their positions before the war; thus, we could claim that we had lost not an inch of territory, even though expansion into Canada had been the not-so-hidden aim of the war’s promoters. The treaty provided for commissions of arbitration to settle other points of dispute between the United States and Canada; all remaining controversies were negotiated peacefully under these provisions, including the establishment of our unfortified boundary, the elimination of naval forces from the Great Lakes, and the settlement of the Saint Lawrence boundary. Thomas Boylston Adams, descendant of John Quincy Adams (who negotiated and signed the treaty), recently wrote of that exemplary document (in his wonderful column “History Looks Ahead,” appearing twice a month in the Boston Globe): “The treaty…ended a war that never should have been begun. Yet its consummation was unbounded good. The peace then confirmed…has never been broken. Its bounty has been the cheerful coexistence of two friendly nations divided by nothing more tangible than an invisible line that runs for 3,000 miles undefended by armed men or armaments.”

4. If the war had not ended, fortunately for us, on such an upbeat, Andy Jackson’s belated victory at New Orleans might have emerged as a bitter joke rather than a symbol of (at least muted) success—and Jackson, deprived of status as a military hero, might never have become president. But why did Britain, in a fit of statesmanship, agree to such a conciliatory treaty, when they held the upper hand militarily? The reasons are complex and based, in part, on expediency (the coalition that had exiled Napoleon to Elba was coming apart, and more troops might soon be needed in Europe). But much credit must also go to the policies of Britain’s remarkable foreign secretary, Robert Stewart, Viscount Castlereagh. In a secret dispatch sent to the British minister in Washington in 1817, Castlereagh set out his basic policy for negotiation, a stance that had guided the restructuring of Europe at the Congress of Vienna, following the final defeat of Napoleon: “The avowed and true policy of Great Britain in the existing State of the World is to secure if possible, for all states a long interval of repose.”

Three years earlier, Castlereagh had put flesh on these brave words by helping to break the deadlock at Ghent and facilitate a peace treaty that did not take all that Britain could have demanded, thereby leaving the United States with both pride and flexibility for a future and deeper peace with Britain. Negotiations had gone badly at Ghent; anger and stalemate ruled. Then, on his way to Vienna, Castlereagh stopped for two days in Ghent, where, in secret meetings with his negotiators, he advocated conciliation and helped to break the deadlock.

5. We must thank the fortunate tides of history that Castlereagh, rather than his counterpart and rival, the hawkish and uncompromising George Canning, was presiding over Britain’s foreign affairs in 1814. (And so you see, dear reader, we are finally getting to Mr. Canning’s rear end, as promised in the title.) The vagaries of a key incident in 1809 led to this favorable outcome. Canning, then foreign secretary, had been pushing for Castlereagh’s ouster as secretary of war. Castlereagh had sent a British expedition against Napoleon’s naval base at Antwerp, but nature had intervened (through no fault of Castlereagh’s), and the troops were boxed in on the island of Walcheren, dying in droves of typhoid fever. Canning used this disaster to press his advantage.

Meanwhile (this does get complicated), the prime minister, the duke of Portland, suffered a paralytic stroke and eventually had to resign. In the various reshufflings and explanations that follow such an event, Perceval, the new prime minister, showed Castlereagh some of Canning’s incriminating letters. Castlereagh did not challenge Canning’s right to lobby for his removal, but he exploded in fury at Canning’s apparent secrecy in machination. Canning, for his part (and not without justice), replied that he had urged open confrontation of the issue, but that higher-ups (including the king) had imposed secrecy, hoping to paper over the affair and somehow preserve the obvious talents of both men in government.

Castlereagh, to say the least, was not satisfied and, in the happily abandoned custom of his age, insisted upon a duel. The two men and their seconds met on Putney Heath at 6 A.M. on September 21. They fired a first round to no effect, but Castlereagh insisted on a second, of much greater import. Castlereagh was spared the fate of Alexander Hamilton by inches, as Canning’s bullet removed a button from his coat but missed his person. Canning was not so fortunate; though more embarrassed than seriously injured, he took Castlereagh’s second bullet in his left buttock. (Historians have tended to euphemism at this point. The latest biography of Castlereagh holds that Canning got it “through the fleshy part of the thigh,” but I have it on good authority that Canning was shot in the ass.) In any case, both men subsequently resigned.

As the world turns and passions cool, both Canning and Castlereagh eventually returned to power. Canning achieved his burning ambition (cause of his machinations against Castlereagh) to become prime minister, if only briefly, in 1827. Castlereagh came back in Canning’s old job of foreign secretary, where he assured the Treaty of Ghent and presided for Britain at the Congress of Vienna.

6. Suppose Canning had fired more accurately and killed Castlereagh on the spot? Canning, or another of his hawkish persuasion, might have imposed stiffer terms upon the United States and deprived Andy Jackson of his hero’s role. More important for our tale, Castlereagh would have been denied the opportunity to die as he actually did, by his own hand, in 1822. Castlereagh had suffered all his life from periods of acute and debilitating “melancholy” and would, today, almost surely be diagnosed as a severe manic depressive. Attacked by the likes of Lord Byron, Shelley, and Thomas Moore for his foreign policies, and suffering from both overwork and parliamentary reverses, Castlereagh became unreasonably suspicious and downright paranoid. He thought that he was being blackmailed for supposed acts of homosexuality (neither the blackmail nor the sexual orientation has ever been proved). His two closest friends, King George IV and the duke of Wellington, failed to grasp the seriousness of his illness and did not secure adequate protection or treatment. On August 12, 1822, though his wife (fearing the worst) had removed all knives and razors from his vicinity, Castlereagh rushed into his dressing room, seized a small knife that had been overlooked, and slit his throat.

7. Yes, we are getting to Darwin, but it takes a while. Point seven is a simple statement of genealogy: Lord Castlereagh’s sister was the mother of Robert FitzRoy, captain of HMS Beagle and host to Charles Darwin on a five-year voyage that bred the greatest revolution in the history of biology.

8. Robert FitzRoy took command of the Beagle at age twenty-three, after the previous captain had suffered a mental breakdown and shot himself. FitzRoy was a brilliant and ambitious man. He had been instructed to take the Beagle on a surveying voyage of the South American coast. But FitzRoy’s own plans extended far beyond a simple mapping trip, for he hoped to set a new standard of scientific observation on a much broader scale. To accomplish his aim, he needed more manpower than the Admiralty was willing to supply. As a person of wealth, he decided to take some extra passengers at his own expense, to beef up the Beagle’s scientific mettle.

A popular scientific myth holds that Darwin sailed on the Beagle as official ship’s naturalist. This is not true. The official naturalist was the ship’s surgeon, Robert McCormick. Darwin, who disliked McCormick and did eventually succeed him as naturalist (after the disgruntled McCormick “invalided out,” to use the euphemism of his time), originally sailed as a supernumerary passenger at FitzRoy’s discretion.

Why, then, did FitzRoy tap Darwin? The obvious answer—that Darwin was a promising young scientist who could aid FitzRoy’s plans for improved observation—may be partly true, but does not get to the heart of FitzRoy’s reasons. First of all, Darwin may have possessed abundant intellectual promise, but he had no scientific credentials when he sailed on the Beagle—a long-standing interest in natural history and bug collecting to be sure, but neither a degree in science nor an intention to enter the profession (he was preparing for the ministry at the time).

FitzRoy took Darwin along primarily for a much different, and personal, reason. As an aristocratic captain, and following the naval customs of his time, FitzRoy could have no social contact with officers or crew during long months at sea. He dined alone and conversed with his men only in an official manner. FitzRoy understood the psychological toll that such enforced solitude could impose, and he remembered the fate of the Beagle’s previous skipper. He decided on a course of action that others had followed in similar circumstances: He decided to take along, at his own expense, a supernumerary passenger to serve, in large part, as a mealtime companion for conversation. He therefore advertised discreetly among his friends for a young man of appropriate social status who could act as both social companion and scientific aid. Charles Darwin, son of a wealthy physician and grandson of the great scholar Erasmus Darwin, fitted the job description admirably.

But most captains did not show such solicitude for their own mental health. Why did FitzRoy so dread the rigors of solitude? We cannot know for sure, but the answer seems to lie, in good part, with the suicide of his uncle, Lord Castlereagh. FitzRoy, by Darwin’s own account, was fearful of a presumed hereditary predisposition to madness, an anxiety that he embodied in the suicide of his famous uncle, whom he so much resembled in looks as well as temperament. Moreover, FitzRoy’s fears proved well founded, for he did break down and temporarily relinquish his command in Valparaiso during a period of overwork and tension. On November 8, 1834, Darwin wrote to his sister Catherine: “We have had some strange proceedings on board the Beagle…Capt. FitzRoy has for the last two months, been working extremely hard and at the same time constantly annoyed…. This was accompanied by a morbid depression of spirits, and a loss of all decision and resolution. The Captain was afraid that his mind was becoming deranged (being aware of his hereditary predisposition)…. He invalided and Wickham was appointed to the command.”

Late in life, and with some hindsight, Darwin mused on the character of Captain FitzRoy in his autobiography:





FitzRoy’s character was a singular one, with many very noble features: he was devoted to his duty, generous to a fault, bold, determined, indomitably energetic, and an ardent friend to all under his sway…. He was a handsome man, strikingly like a gentleman, with highly courteous manners, which resembled those of his maternal uncle, the famous Lord Castlereagh…. FitzRoy’s temper was a most unfortunate one. This was shown not only by passion but by fits of long-continued moroseness…. He was also somewhat suspicious and occasionally in very low spirits, on one occasion bordering on insanity. He was extremely kind to me, but was a man very difficult to live with on the intimate terms which necessarily followed from our messing by ourselves in the same cabin. [Darwin does mean “eating,” and we find no sexual innuendo either here or anywhere else in their relationship.]



I am struck by the similarity, according to Darwin’s description, between FitzRoy and his uncle, Lord Castlereagh, not only in physical characteristics and social training, but especially in the chronicle of a mental history so strongly implying a lifelong pattern of severe manic depression. In other words, I think that FitzRoy was correct in his self-diagnosis of a tendency to hereditary mental illness. Castlereagh’s dramatic example had served him well as a warning, and his decision, so prompted, to take Darwin on the Beagle was history’s reward.

But suppose Canning had killed Castlereagh, rather than just removing a button from his coat? Would FitzRoy have developed so clear a premonition about his own potential troubles without the terrible example of his beloved uncle’s suicide during his most impressionable years (FitzRoy was seventeen when Castlereagh died)? Would Darwin have secured his crucial opportunity if Canning’s bullet had been on the mark?

Tragically, FitzRoy’s premonition eventually came to pass in almost eerie consonance with his own nightmare and memory of Castlereagh. FitzRoy’s later career had its ups and downs. He suffered from several bouts of prolonged depression, accompanied by increasing suspicion and paranoia. In his last post, FitzRoy served as chief of the newly formed Meteorological Office and became a pioneer in weather forecasting. FitzRoy is much admired today for his cautious and excellent work in a most difficult field. But he encountered severe criticism during his own tenure, and for the obvious reason. Weathermen take enough flak today for incorrect predictions. Imagine the greater uncertainties more than a century ago. FitzRoy was stung by criticism of his imprecision. With a healthy mind, he would have parried the blows and come out fighting. But he sank into even deeper despair and eventually committed suicide by slitting his throat on April 20, 1865. Darwin mourned for his former friend (and more recent enemy of evolution), noting the fulfillment of the prophecy that had fostered his own career: “His end,” Darwin wrote, “was a melancholy one, namely suicide, exactly like that of his uncle Ld. Castlereagh, whom he resembled closely in manner and appearance.”

9. Finally, the other short and obvious statement: We must reject the self-serving historical myth that Darwin simply “saw” evolution in the raw when he broke free from the constraints of his culture and came face to face with nature all around the world. Darwin, in fact, did not become an evolutionist until he returned to England and struggled to make sense of what he had observed in the light of his own heritage: of Adam Smith, William Wordsworth, and Thomas Malthus, among others. Nonetheless, without the stimulus of the Beagle, I doubt that Darwin would have concerned himself with the origin of species or even entered the profession of science at all. Five years aboard the Beagle did serve as the sine qua non of Darwin’s revolution in thought.

My chain of argument runs in two directions from George Canning’s left buttock: on one branch, to Castlereagh’s survival, his magnanimous approach to the face-saving Treaty of Ghent, the consequent good feeling that made the Battle of New Orleans a heroic conquest rather than a bitter joke, to Andrew Jackson’s emergence as a military hero and national figure ripe for the presidency; on the other branch, to Castlereagh’s survival and eventual death by his own hand, to the example thus provided to his similarly afflicted nephew Robert FitzRoy, to FitzRoy’s consequent decision to take a social companion aboard the Beagle, to the choice of Darwin, to the greatest revolution in the history of biological thought. The duel on Putney Heath branches out in innumerable directions, but one leads to Jackson’s presidency and the other to Darwin’s discovery.

I don’t want to push this style of argument too far, and this essay is meant primarily as comedy (however feeble the attempt). Anyone can set out a list of contrary proposals. Jackson was a tough customer and might have made his way to the top without a boost from New Orleans. Perhaps FitzRoy didn’t need the drama of Castlereagh’s death to focus a legitimate fear for his own sanity. Perhaps Darwin was so brilliant, so purposeful, and so destined that he needed no larger boost from nature than a beetle collection in an English parsonage.

No connections are certain (for we cannot perform the experiment of replication), but history presents, as its primary fascination, this feature of large and portentous movements arising from tiny quirks and circumstances that appear insignificant at the time but cascade into later, and unpredictable, prominence. The chain of events makes sense after the fact, but would never occur in the same way again if we could rerun the tape of time.

I do not, of course, claim that history contains nothing predictable. Many broad directions have an air of inevitability. A theory of evolution would have been formulated and accepted, almost surely in the mid-nineteenth century, if Charles Darwin had never been born, if only for the simple reason that evolution is true, and not so veiled from our sight (and insight) that discovery could long have tarried behind the historical passage of cultural barriers to perception.

But we are creatures of endless and detailed curiosity. We are not sufficiently enlightened by abstractions devoid of flesh and bones, idiosyncrasies and curiosities. We cannot be satisfied by concluding that a thrust of Western history, and a dollop of geographic separation, virtually guaranteed the eventual independence of the United States. We want to know about the tribulations at Valley Forge, the shape of the rude bridge that arched the flood at Concord, the reasons for crossing out “property” and substituting “pursuit of happiness” in Jefferson’s great document. We care deeply about Darwin’s encounter with Galápagos tortoises and his studies of earthworms, orchids, and coral reefs, even if a dozen other naturalists would have carried the day for evolution had Canning killed Castlereagh, FitzRoy sailed alone, and Darwin become a country parson. The details do not merely embellish an abstract tale moving in an inexorable way. The details are the story itself; the underlying predictability, if discernible at all, is too nebulous, too far in the background, and too devoid of hooks upon actual events to count as an explanation in any satisfying sense.

Darwin, that great beneficiary of a thousand chains of improbable circumstance, came to understand this principle and to grasp thereby the essence of history in its largest domain of geology and life. When America’s great Christian naturalist Asa Gray told Darwin that he was prepared to accept the logic of natural selection but recoiled at the moral implications of a world without divine guidance, Darwin cited history as a resolution. Gray, in obvious distress, had posed the following argument: Science implies lawfulness; laws (like the principle of natural selection) are instituted by God to ensure his benevolent aims in the results of nature; the path of history, however full of apparent sorrow and death, must therefore include purpose. Darwin replied that laws surely exist and that, for all he knew, they might well embody a purpose legitimately labeled divine. But, Darwin continued, laws only regulate the broad outlines of history, “with the details, whether good or bad, left to the working out of what we may call chance.” (Note Darwin’s careful choice of words. He does not mean “random” in the sense of uncaused; he speaks of events so complex and contingent that they fall, by their unpredictability and unrepeatability, into the domain of “what we may call chance.”)

But where shall we place the boundary between lawlike events and contingent details? Darwin presses Gray further. If God be just, Darwin holds, you could not claim that the improbable death of a man by lightning or the birth of a child with serious mental handicaps represents the general and inevitable way of our world (even though both events have demonstrable physical causes). And if you accept “what we may call chance” (the presence of this man under that tree at that moment) as an explanation for a death, then why not for a birth? And if for the birth of an individual, why not for the origin of a species? And if for the origin of a species, then why not for the evolution of Homo sapiens as well?

You can see where Darwin’s chain of argument is leading: Human intelligence itself—the transcendent item that, above all else, supposedly reflected God’s benevolence, the rule of law, and the necessary progress of history—might be a detail, and not the predictable outcome of first principles. I wouldn’t push this argument to an absurd extreme. Consciousness in some form might lie in the realm of predictability, or at least reasonable probability. But we care about details. Consciousness in human form—by means of a brain plagued with inherent paths of illogic, and weighted down by odd and dysfunctional inheritances, in a body with two eyes, two legs, and a fleshy upper thigh—is a detail of history, an outcome of a million improbable events, never destined to repeat. We care about George Canning’s sore behind because we sense, in the cascade of consequences, an analogy to our own tenuous existence. We revel in the details of history because they are the source of our being.





2 | Grimm’s Greatest Tale




WITH THE POSSIBLE EXCEPTION of Eng and Chang, who had no choice, no famous brothers have ever been closer than Wilhelm and Jacob Grimm, who lived and worked together throughout their long and productive lives. Wilhelm (1786–1859) was the prime mover in collecting the Kinder-und Hausmärchen (fables for the home and for children) that have become a pillar and icon of our culture. (Can you even imagine a world without Rapunzel or Snow White?) Jacob, senior member of the partnership (1785–1863), maintained a primary interest in linguistics and the history of human speech. His Deutsche Grammatik, first published in 1819, became a cornerstone for documenting relationships among Indo-European languages. Late in their lives, after a principled resignation from the University of Göttingen (prompted by the king of Hanover’s repeal of the 1833 constitution as too liberal), the brothers Grimm settled in Berlin where they began their last and greatest project, the Deutsches Wörterbuch—a gigantic German dictionary documenting the history, etymology, and use of every word contained in three centuries of literature from Luther to Goethe. Certain scholarly projects are, like medieval cathedrals, too vast for completion in the lifetimes of their architects. Wilhelm never got past D; Jacob lived to see the letter F.

Speaking in Calcutta, during the infancy of the British raj in 1786, the philologist William Jones first noted impressive similarities between Sanskrit and the classical languages of Greece and Rome (an Indian king, or raja, matches rex, his Latin counterpart). Jones’s observation led to the recognition of a great Indo-European family of languages, now spread from the British Isles and Scandinavia to India, but clearly rooted in a single, ancient origin. Jones may have marked the basic similarity, but the brothers Grimm were among the first to codify regularities of change that underpin the diversification of the rootstock into its major subgroups (Romance languages, Germanic tongues, and so on). Grimm’s law, you see, does not state that all frogs shall turn into princes by the story’s end, but specifies the characteristic changes in consonants between Proto-Indo-European (as retained in Latin) and the Germanic languages. Thus, for example, Latin p’s become f’s in Germanic cognates (voiceless stops become voiceless fricatives in the jargon). The Latin plenum becomes “full” (voll, pronounced “foll” in German); piscis becomes “fish” (Fisch in German); and pes becomes “foot” (Fuss in German). (Since English is an amalgam of a Germanic stock with Latin-based imports from the Norman conquest, our language has added Latin cognates to Anglo-Saxon roots altered according to Grimm’s law—plenty, piscine, and podiatry. We can even get both for the price of one in plentiful.)
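To see the regularity as a bare mechanical rule, here is a minimal sketch (in Python, purely illustrative) of the voiceless-stop shift alone: Proto-Indo-European p, t, k, preserved in Latin, become Germanic f, th, h. The pairs pes/foot, piscis/fish, and plenum/full are the cognates cited above; tres/three and cornu/horn are standard textbook examples added to cover the other two stops. Everything past the initial consonant, and every real complication of historical phonology, is ignored.

# A purely illustrative toy: the voiceless-stop correspondences of
# Grimm's law (PIE p, t, k, kept in Latin, became Germanic f, th, h).
# Only the first consonant of each Latin root is shifted; the rest of
# each word followed its own, far messier, history.
GRIMM_SHIFT = {"p": "f", "t": "th", "c": "h", "k": "h"}

COGNATES = [
    ("pes", "foot"),      # cited in the essay
    ("piscis", "fish"),   # cited in the essay
    ("plenum", "full"),   # cited in the essay
    ("tres", "three"),    # standard textbook example for t -> th
    ("cornu", "horn"),    # standard textbook example for k -> h
]

for latin, english in COGNATES:
    initial = latin[0]
    shifted = GRIMM_SHIFT.get(initial, initial)
    print(f"Latin {latin:7s} initial '{initial}' -> Germanic '{shifted}'"
          f"   (compare English '{english}')")

Real sound laws operate on whole phonological systems rather than on spellings, so this is a mnemonic for the pattern, not a model of it.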

I first learned about Grimm’s law in a college course more than twenty-five years ago. Somehow, the idea that the compilers of Rapunzel and Rumpelstiltskin also gave the world a great scholarly principle in linguistics struck me as one of the sweetest little facts I ever learned—a statement, symbolic at least, about interdisciplinary study and the proper contact of high and vernacular culture. I have wanted to disgorge this tidbit for years and am delighted that this essay finally provided an opportunity.

A great dream of unification underlay the observations of Jones and the codification of systematic changes by Jacob Grimm. Nearly all the languages of Europe (with such fascinating exceptions as Basque, Hungarian, and Finnish) could be joined to a pathway that spread through Persia all the way to India via Sanskrit and its derivatives. An origin in the middle, somewhere in the Near East, seemed indicated, and such “fossil” Indo-European tongues as Hittite support this interpretation. Whether the languages were spread, as convention dictates, by conquering nomadic tribes on horseback or, as Colin Renfrew argues in his recent book (Archaeology and Language, 1987), more gently and passively by the advantages of agriculture, evidence points to a single source with a complex history of proliferation in many directions.

Might we extend the vision of unity even further? Could we link Indo-European with the Semitic (Hebrew, Arabic) languages of the so-called Afro-Asiatic stock; the Altaic languages of Tibet, Mongolia, Korea, and Japan; the Dravidian tongues of southern India; even to the native Amerindian languages of the New World? Could the linkages extend even further to the languages of southeastern Asia (Chinese, Thai, Malay, Tagalog), the Pacific Islands, Australia, and New Guinea, even (dare one dream) to the most different tongues of southern Africa, including the Khoisan family with its complex clicks and implosions?

Most scholars balk at the very thought of direct evidence for connections among these basic “linguistic phyla.” The peoples were once united, of course, but the division and spread occurred so long ago (or so the usual argument goes) that no traces of linguistic similarity should be left according to standard views about rates of change in such volatile aspects of human culture. Yet a small group of scholars, including some prominent émigrés from the Soviet Union (where theories of linguistic unification are not so scorned), persists in arguing for such linkages, despite acrimonious rebuttal and dismissal from most Western colleagues. One heterodox view tries to link Indo-European with linguistic phyla of the Near East and northern Asia (from Semitic at the southwest, to Dravidian at the southeast, all the way to Japanese at the northeast) by reconstructing a hypothetical ancestral tongue called Nostratic (from the Latin noster, meaning “our”). An even more radical view holds that modern tongues still preserve enough traces of common ancestry to link Nostratic with the native languages of the Americas (all the way to South America via the Eskimo tongues, but excluding the puzzling Na-Dene languages of northwestern America).

The vision is beguiling, but I haven’t the slightest idea whether any of these unorthodox notions has a prayer of success. I have no technical knowledge of linguistics, only a hobbyist’s interest in language. But I can report, from my own evolutionary domain, that the usual biological argument, invoked a priori against the possibility of direct linkage among linguistic phyla, no longer applies. This conventional argument held that Homo sapiens arose and split (by geographical migration) into its racial lines far too long ago for any hope that ancestral linguistic similarities might be retained by modern speakers. (A stronger version held that various races of Homo sapiens arose separately and in parallel from different stocks of Homo erectus, thus putting the point of common linguistic ancestry even further back into a truly inaccessible past. Indeed, according to this view, the distant common ancestor of all modern people might not even have possessed language. Some linguistic phyla might have arisen as separate evolutionary inventions, scotching any hope for theories of unification.)

The latest biological evidence, mostly genetic but with some contribution from paleontology, strongly indicates a single and discrete African origin for Homo sapiens at a date much closer to the present than standard views would have dared to imagine—perhaps only 200,000 years ago or so, with all non-African diversity perhaps no more than 100,000 years old. Within this highly compressed framework of common ancestry, the notion that conservative linguistic elements might still link existing phyla no longer seems so absurd a priori. The idea is worth some serious testing, even if absolutely nothing positive eventually emerges.

This compression of the time scale also suggests possible success for a potentially powerful research program into the great question of historical linkages among modern peoples. Three major and entirely independent sources of evidence might be used to reconstruct the human family tree: (1) direct but limited evidence of fossil bones and artifacts by paleontology and archaeology; (2) indirect but copious data on degrees of genetic relationship among living peoples; (3) relative similarities and differences among languages, as discussed above. We might attempt to correlate these separate sources, searching for similarities in pattern. I am delighted to report some marked successes in this direction (“Reconstruction of Human Evolution: Bringing Together Genetic, Archaeological, and Linguistic Data,” by L. L. Cavalli-Sforza, A. Piazza, P. Menozzi, and J. Mountain, Proceedings of the National Academy of Sciences, 1988). The reconstruction of the human family tree—its branching order, its timing, and its geography—may be within our grasp. Since this tree is the basic datum of history, hardly anything in intellectual life could be more important.

Our recently developed ability to measure genetic distances for large numbers of protein or DNA sequences provides the keystone for resolving the human family tree. As I have argued many times, such genetic data take pride of place not because genes are “better” or “more fundamental” than data of morphology, geography, and language, but only because genetic data are so copious and so comparable. We all shared a common origin, and therefore a common genetics and morphology, as a single ancestral population some quarter of a million years ago. Since then, differences have accumulated as populations separated and diversified. As a rough guide, the more extensive the measured differences, the greater the time of separation. This correlation between extent of difference and time of separation becomes our chief tool for reconstructing the human family tree.

But this relationship is only rough and very imperfect. So many factors can distort and disrupt a strict correlation of time and difference. Similar features can evolve independently—black skin in Africans and Australians, for example, since these groups stand as far apart genealogically as any two peoples on earth. Rates of change need not be constant. Tiny populations, in particular, can undergo marked increases in rate, primarily by random forces of genetic drift. The best way to work past these difficulties lies in a “brute force” approach: The greater the quantity of measured differences, the greater the likelihood of a primary correlation between time and overall distance. Any single measure of distance may be impacted by a large suite of forces that can disrupt the correlation of time and difference—natural selection, convergence, rapid genetic drift in small populations. But time is the only common factor underlying all measures of difference; when two populations split, all potential measures of distance become free to diverge. Thus, the more independent measures of distance we compile, the more likely we are to recover the only common signal of diversification: time itself. Only genetic data (at least for now) can supply this required richness in number of comparisons.
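The logic of that brute-force argument can be made concrete with a small, entirely hypothetical simulation (the numbers below are invented for illustration, not taken from any real data set): let each independent distance measure equal the true separation time plus its own random distortion, standing in for selection, drift, and convergence. A single measure can be wildly off, but the average of many converges on the one signal they all share.

# Hypothetical illustration: each distance measure = shared time signal
# plus an independent distortion (here simple Gaussian noise standing in
# for selection, drift, convergence). Averaging many measures recovers
# the common signal; any single one may not.
import random

random.seed(1)
TRUE_SEPARATION = 100.0   # invented "true" time since two populations split
DISTORTION = 60.0         # a single measure can be badly off

def estimated_separation(n_measures: int) -> float:
    """Average n independent, noisy distance measures."""
    draws = [TRUE_SEPARATION + random.gauss(0, DISTORTION) for _ in range(n_measures)]
    return sum(draws) / n_measures

for n in (1, 10, 120):    # 120 echoes the 120 alleles used by Cavalli-Sforza
    print(f"{n:4d} independent measures -> estimate {estimated_separation(n):6.1f}"
          f"   (true value {TRUE_SEPARATION:.1f})")

Nothing about the real evolutionary forces is captured here; the point is only that time, being the one term common to every measure, is what survives the averaging.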

Genetic data on human differences are flowing in from laboratories throughout the world, and this essay shall be obsolete before it hits the presses. Blood groups provided our first crude insights during the 1960s, and Cavalli-Sforza was a pioneer in these studies. When techniques of electrophoresis permitted us to survey routinely for variation in the enzymes and proteins coded directly by genes, then data on human differences began to accumulate in useful cascades. More recently, our ability to sequence DNA itself has given us even more immediate access to the sources of variation.

The methodologically proper and powerful brute force comparisons are, for the moment, best made by studying differing states and frequencies of genes as revealed in the amino acid sequences of enzymes and proteins. Cavalli-Sforza and colleagues used information from alleles (varying states of genes, as in tall versus short for Mendel’s peas) to construct a tree for human populations least affected by extensive interbreeding. (Few human groups are entirely aboriginal, and most populations are interbred to various degrees, given the two most characteristic attributes of Homo sapiens: wanderlust and vigorous sexuality. Obviously, if we wish to reconstruct the order of diversified branching from a common point of origin, historically mixed populations will confuse our quest. The Cape Colored, living disproof from their own ancestors for the Afrikaner “ideal” of apartheid, would join Khoisan with Caucasian. One town in Brazil might well join everyone.)

Cavalli-Sforza’s consensus tree, based on overall genetic distances among 120 alleles for 42 populations—probably the best we can do for now, based on the maximal amount of secure and consistent information—divides modern humans into seven major groups, as shown in the accompanying chart. Only branching order counts in assessing relative similarity, not the happenstance of alignment along the bottom of the chart. Africans are not closer to Caucasians than to Australians just because the two groups are adjacent; rather, Africans are equally far from all other peoples by virtue of their common branching point with the ancestor of all six additional groups. (Consider the diagram as a mobile, free to rotate about each vertical “string.” We could turn around the entire array of Groups II to VII, placing Australians next to Africans and Caucasians at the far right, without altering the branching order.)
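The “mobile” point can also be stated in the idiom of a simple data structure. The short Python sketch below is mine, with hypothetical labels for the seven groups nested to follow the branching described in the text; it checks that reversing the left and right children at every node—turning the mobile—leaves the set of clades, and hence the branching order, untouched.

    # Illustrative only: a tree as nested tuples; leaves are group names.
    def clades(tree):
        """Return the clades (frozensets of leaves) defined by each branch point."""
        if isinstance(tree, str):
            return {frozenset([tree])}
        left, right = tree
        below = clades(left) | clades(right)
        return below | {frozenset().union(*below)}

    def rotate(tree):
        """Swap the left and right children at every internal node."""
        if isinstance(tree, str):
            return tree
        left, right = tree
        return (rotate(right), rotate(left))

    tree = ("Africans",
            (("Australians/New Guineans", ("Pacific islanders", "SE Asians")),
             ("Caucasians", ("NE Asians", "Native Americans"))))

    assert clades(tree) == clades(rotate(tree))
    print("Same branching order, however the mobile is turned.")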

These seven basic groups, established solely on genetic distances, make excellent sense when we consider the geographic distribution of Homo sapiens. Humans presumably evolved in Africa, and the first great split separates Africans from all other groups—representing the initial migration of some Homo sapiens out of the mother continent. The next split separates the coherent region of the Pacific and Southeast Asia from the rest of the world. One group reached Australia and New Guinea, perhaps 40,000 years ago, forming the aboriginal populations of this region. A later division separated the Pacific island peoples (Group VI, including Polynesians, Micronesians, and Melanesians) from the southeastern Asiatics (Group V, including southern Chinese, Thai, Malayan, and Filipino).





Cavalli-Sforza’s consensus tree for the evolutionary relationships of human groups based on overall genetic distances. Postulated relationships among language families match this pattern remarkably well. See text for details. IROMIE WEERAMANTRY, COURTESY OF NATURAL HISTORY.

Meanwhile, the second great branch divided to split the northern Oriental stocks from the Caucasians (Group II, including Europeans, Semitic peoples of southwest Asia, Iranians, and Indians). A second division separated the Native American peoples (Group IV) from the northeast Asian family (Group III, including the Uralic peoples who left Hungarian, Finnish, and Estonian as their non-Indo-European calling cards from invasions into Caucasian territories, and the Altaic peoples of Mongolia, Korea, and Japan).

This good and sensible order indicates that genetic data are not betraying our efforts to reconstruct the human family tree. But Cavalli-Sforza and colleagues go further toward the great promise of extending this correlation between genes and geography to the other great sources of independent information—the geological and linguistic records.

I find the linguistic correlations more exciting than anything else in the work of Cavalli-Sforza and colleagues. Language is so volatile. Conquerors can impose their language as well as their will. Tongues interpenetrate and merge with an explosive ease not granted to genes or morphology. Look at English; look at any of us. I, for example, live in America, the indigenous home of very different people. I speak English, and consider the cathedral of Chartres the world’s most beautiful building. But my grandparents spoke Hungarian, a non-Indo-European language. And, along with Disraeli, my more distant ancestors were priests in the Temple of Solomon when the physical forebears of the original English people still lived as “brutal savages in an unknown island.” One might have anticipated very little correlation between language and the tree of human ancestry.

Yet the mapping of linguistic upon genetic tree is remarkable in its degree of overlap. Exceptions exist, of course, and for the reasons mentioned above. Ethiopians speak an Afro-Asiatic language (in the phylum of Hebrew and Arabic), but belong to the maximally distant African group by genes. The Tibetan language links with Chinese in Group V, although the Tibetan people belong with northeast Asians in Group III. But Tibetans migrated from the steppes north of China, and Ethiopians have maintained primary contact and admixture with Semitic speakers for millennia. The correlations, however, are striking. Each genetic group also defines either a single linguistic phylum or a few closely related phyla. The Pacific island languages, with their mellifluous vowels and nearly nonexistent consonants, define Group VI almost as well as the genetic distances. The Indo-European languages set the borders of Caucasian affinity, while the other major tongues of Caucasian peoples (Afro-Asiatic of the Semitic group) belong to a related linguistic phylum.

I am especially intrigued that the heterodox hypotheses for linkages among linguistic phyla, and for potential reconstructions of human languages even closer to the original tongue, follow the genetic connections so faithfully. Nostratic would link Groups II and III. The even more heterodox connection of Nostratic with Amerindian tongues would include Group IV as well. Note that Groups II to IV form a coherent limb of the human family tree. The Tower of Babel may emerge as a strikingly accurate metaphor. We probably did once speak the same language, and we did diversify into incomprehension as we spread over the face of the earth. But this original tongue was not an optimal construction given by a miracle to all people. Our original linguistic unity is only historical happenstance, not crafted perfection. We were once a small group of Africans, and the mother tongue is whatever these folks said to each other, not the Holy Grail.

This research has great importance for the obvious and most joyously legitimate parochial reason—our intense fascination with ourselves and the details of our history. We really do care that our species arose closer to 250,000 than to 2 million years ago, that Basque is the odd man out of European languages, and that the peopling of the Americas is not mysterious for its supposed “delay,” but part of a regular process of expansion from an African center, and basically “on time” after all.

But I also sense a deeper importance in this remarkable correlation among all major criteria for reconstructing our family tree. This high correspondence can only mean that a great deal of human diversity, far more than we ever dared hope, achieves a remarkably simple explanation in history itself. If you know when a group split off and where it spread, you have the basic outline (in most cases) of its relationships with others. The primary signature of time and history is not effaced, or even strongly overlain in most cases, by immediate adaptation to prevailing circumstances or by recent episodes of conquest and amalgamation. We remain the children of our past—and we might even be able to pool our differences and to extract from inferred pathways of change a blurred portrait of our ultimate parents.

The path is tortuous and hard to trace, as the sister of the seven ravens learned when she went from the sun to the moon to the glass mountain in search of her brothers. History is also a hard taskmaster, for she covers her paths by erasing so much evidence from her records—as Hansel and Gretel discovered when birds ate their Ariadne’s thread of bread crumbs. Yet the potential rewards are great, for we may recover the original state so hidden by our later changes—the prince behind the frog or the king that became the bear companion of Snow White and Rose Red. And the criteria that may lead to success are many and varied—not only the obvious data of genes and fossils but also the clues of language. For we must never doubt the power of names, as Rumpelstiltskin learned to his sorrow.





3 | The Creation Myths of Cooperstown




YOU MAY EITHER LOOK upon the bright side and say that hope springs eternal or, taking the cynic’s part, you may mark P.T. Barnum as an astute psychologist for his proclamation that suckers are born every minute. The end result is the same: You can, Honest Abe notwithstanding, fool most of the people all of the time. How else to explain the long and continuing compendium of hoaxes—from the medieval shroud of Turin to Edwardian Piltdown Man to an ultramodern array of flying saucers and astral powers—eagerly embraced for their consonance with our hopes or their resonance with our fears?

Some hoaxes make a sufficient mark upon history that their products acquire the very status initially claimed by fakery—legitimacy (although as an object of human or folkloric, rather than natural, history; I once held the bones of Piltdown Man and felt that I was handling an important item of Western culture).

The Cardiff Giant, the best American entry for the title of paleontological hoax turned into cultural history, now lies on display in a shed behind a barn at the Farmer’s Museum in Cooperstown, New York. This gypsum man, more than ten feet tall, was “discovered” by workmen digging a well on a farm near Cardiff, New York, in October 1869. Eagerly embraced by a gullible public, and ardently displayed by its creators at fifty cents a pop, the Cardiff Giant caused quite a brouhaha around Syracuse, and then nationally, for the few months of its active life between exhumation and exposure.





A broadsheet from 1869 giving vital statistics of the Cardiff Giant. NEW YORK STATE HISTORICAL ASSOCIATION, COOPERSTOWN, NY.





The Cardiff Giant as now on display in the Farmer’s Museum in Cooperstown, New York. NEW YORK STATE HISTORICAL ASSOCIATION, COOPERSTOWN, NY.

The Cardiff Giant was the brainchild of George Hull, a cigar manufacturer (and general rogue) from Binghamton, New York. He quarried a large block of gypsum from Fort Dodge, Iowa, and shipped it to Chicago, where two marble cutters fashioned the rough likeness of a naked man. Hull made some crude and minimal attempts to give his statue an aged appearance. He chipped off the carved hair and beard because experts told him that such items would not petrify. He drove darning needles into a wooden block and hammered the statue, hoping to simulate skin pores. Finally, he dumped a gallon of sulfuric acid all over his creation to simulate extended erosion. Hull then shipped his giant in a large box back to Cardiff.

Hull, as an accomplished rogue, sensed that his story could not hold for long and, in that venerable and alliterative motto, got out while the getting was good. He sold a three-quarter interest in the Cardiff Giant to a consortium of highly respectable businessmen, including two former mayors of Syracuse. These men raised the statue from its original pit on November 5 and carted it off to Syracuse for display.

The hoax held on for a few more weeks, and Cardiff Giant fever swept the land. Debate raged in newspapers and broadsheets between those who viewed the giant as a petrified fossil and those who regarded it as a statue wrought by an unknown and wondrous prehistoric race. But Hull had left too many tracks—at the gypsum quarries in Fort Dodge, at the carver’s studio in Chicago, along the roadways to Cardiff (several people remembered seeing an awfully large box passing by on a cart). By December, Hull was ready to recant, but held his tongue a while longer. Three months later, the two Chicago sculptors came forward, and the Cardiff Giant’s brief rendezvous with fame and fortune ended.

The common analogy of the Cardiff Giant with Piltdown Man works only to a point (both were frauds passed off as human fossils) and fails in one crucial respect. Piltdown was cleverly wrought and fooled professionals for forty years, while the Cardiff Giant was preposterous from the start. How could a man turn to solid gypsum, while preserving all his soft anatomy, from cheeks to toes to penis? Geologists and paleontologists never accepted Hull’s statue. O. C. Marsh, later to achieve great fame as a discoverer of dinosaurs, echoed a professional consensus in his unambiguous pronouncement: “It is of very recent origin and a decided humbug.”

Why, then, was the Cardiff Giant so popular, inspiring a wave of interest and discussion as high as any tide in the affairs of men during its short time in the sun? If the fraud had been well executed, we might attribute this great concern to the dexterity of the hoaxers (just as we grant grudging attention to a few of the most accomplished art fakers for their skills as copyists). But since the Cardiff Giant was so crudely done, we can only attribute its fame to the deep issue, the raw nerve, touched by the subject of its fakery—human origins. Link an absurd concoction to a noble and mysterious subject and you may prevail, at least for a while. My opening reference to P.T. Barnum was not meant sarcastically; he was one of the great practical psychologists of the nineteenth century—and his motto applies with special force to the Cardiff Giant: “No humbug is great without truth at bottom.” (Barnum made a copy of the Cardiff Giant and exhibited it in New York City. His mastery of hype and publicity assured that his model far outdrew the “real” fake when the original went on display at a rival establishment in the same city.)

For some reason (to be explored, but not resolved, in this essay), we are powerfully drawn to the subject of beginnings. We yearn to know about origins, and we readily construct myths when we do not have data (or we suppress data in favor of legend when a truth strikes us as too commonplace). The hankering after an origin myth has always been especially strong for the closest subject of all—the human race. But we extend the same psychic need to our accomplishments and institutions—and we have origin myths and stories for the beginning of hunting, of language, of art, of kindness, of war, of boxing, bow ties, and brassieres. Most of us know that the Great Seal of the United States pictures an eagle holding a ribbon reading e pluribus unum. Fewer would recognize the motto on the other side (check it out on the back of a dollar bill): annuit coeptis—“he smiles on our beginnings.”

Cooperstown may house the Cardiff Giant, but the fame of this small village in central New York does not rest upon its celebrated namesake, author James Fenimore Cooper, or its lovely Lake Otsego or the Farmer’s Museum. Cooperstown is “on the map” by virtue of a different origin myth—one more parochial but no less powerful for many Americans than the tales of human beginnings that gave life to the Cardiff Giant. Cooperstown is the sacred founding place in the official myth about the origin of baseball.

Origin myths, since they are so powerful, can engender enormous practical problems. Abner Doubleday, as we shall soon see, most emphatically did not invent baseball at Cooperstown in 1839 as the official tale proclaims; in fact, no one invented baseball at any moment or in any spot. Nonetheless, this creation myth made Cooperstown the official home of baseball, and the Hall of Fame, with its associated museum and library, set its roots in this small village, inconveniently located near nothing in the way of airports or accommodations. We all revel in bucolic imagery on the field of dreams, but what a hassle when tens of thousands line the roads, restaurants, and Port-a-potties during the annual Hall of Fame weekend, when new members are enshrined and two major league teams arrive to play an exhibition game at Abner Doubleday Field, a sweet little 10,000-seater in the middle of town. Put your compass point at Cooperstown, make your radius at Albany—and you’d better reserve a year in advance if you want any accommodation within the enormous resulting circle.

After a lifetime of curiosity, I finally got the opportunity to witness this annual version of forty students in a telephone booth or twenty circus clowns in a Volkswagen. Since Yaz (former Boston star Carl Yastrzemski to the uninitiated) was slated to receive baseball’s Nobel in 1989, and his old team was playing in the Hall of Fame game, and since I’m a transplanted Bostonian (although still a New Yorker and not-so-secret Yankee fan at heart), Tom Heitz, chief of the wonderful baseball library at the Hall of Fame, kindly invited me to join the sardines in this most lovely of all cans.





A.G. Spalding, promoter of the Doubleday creation myth. NATIONAL BASEBALL LIBRARY, COOPERSTOWN, NY.

The silliest and most tendentious of baseball writing tries to wrest profundity from the spectacle of grown men hitting a ball with a stick by suggesting linkages between the sport and deep issues of morality, parenthood, history, lost innocence, gentleness, and so on, seemingly ad infinitum. (The effort reeks of silliness because baseball is profound all by itself and needs no excuses; people who don’t know this are not fans and are therefore unreachable anyway.) When people ask me how baseball imitates life, I can only respond with what the more genteel newspapers used to call a “barnyard epithet,” but now, with growing bravery, usually render as “bullbleep.” Nonetheless, baseball is a major item of our culture, and the sport does have a long and interesting history. Any item or institution with these two properties must generate a set of myths and stories (perhaps even some truths) about beginnings. And the subject of beginnings is the bread and butter of these essays on evolution in the broadest sense. I shall make no woolly analogies between baseball and life; this is an essay on the origins of baseball, with some musings on why beginnings of all sorts hold such fascination for us. (I thank Tom Heitz not only for the invitation to Cooperstown at its yearly acme but also for drawing the contrast between creation and evolution stories of baseball, and for supplying much useful information from his unparalleled storehouse.)

Stories about beginnings come in only two basic modes. An entity either has an explicit point of origin, a specific time and place of creation, or else it evolves and has no definable moment of entry into the world. Baseball provides an interesting example of this contrast because we know the answer and can judge received wisdom by the two chief criteria, often opposed, of external fact and internal hope. Baseball evolved from a plethora of previous stick-and-ball games. It has no true Cooperstown and no Doubleday. Yet we seem to prefer the alternative model of origin by a moment of creation—for then we can have heroes and sacred places. By contrasting the myth of Cooperstown with the fact of evolution, we can learn something about our cultural practices and their frequent disrespect for truth.

The official story about the beginning of baseball is a creation myth, and a review of the reasons and circumstances of its fabrication may give us insight into the cultural appeal of stories in this mode. A. G. Spalding, baseball’s first great pitcher during his early career, later founded the sporting goods company that still bears his name and became one of the great commercial moguls of America’s gilded age. As publisher of the annual Spalding’s Official Base Ball Guide, he held maximal power in shaping both public and institutional opinion on all facets of baseball and its history. As the sport grew in popularity, and the pattern of two stable major leagues coalesced early in our century, Spalding and others felt the need for clarification (or merely for codification) of opinion on the hitherto unrecorded origin of an activity that truly merited its common designation as America’s “national pastime.”

In 1907, Spalding set up a blue ribbon committee to investigate and resolve the origin of baseball. The committee, chaired by A. G. Mills and including several prominent businessmen and two senators who had also served as presidents of the National League, took much testimony but found no smoking gun. Then, in July 1907, Spalding himself transmitted to the committee a letter from an Abner Graves, then a mining engineer in Denver, who reported that Abner Doubleday had, in 1839, interrupted a marbles game behind the tailor’s shop in Cooperstown, New York, to draw a diagram of a baseball field, explain the rules of the game, and designate the activity by its modern name of “base ball” (then spelled as two words).





Abner Doubleday, who fired the first Union volley at Fort Sumter, but who, in the words of one historian, didn’t know a baseball from a kumquat. NATIONAL BASEBALL LIBRARY, COOPERSTOWN, NY.

Such “evidence” scarcely inspired universal confidence, but the commission came up with nothing better—and the Doubleday myth, as we shall soon see, was eminently functional. Therefore, in 1908, the Mills Commission reported its two chief findings: first, “that base ball had its origins in the United States” and second, “that the first scheme for playing it, according to the best evidence available to date, was devised by Abner Doubleday, at Cooperstown, New York, in 1839.” This “best evidence” consisted only of “a circumstantial statement by a reputable gentleman”—namely Graves’s testimony as reported by Spalding himself.





Henry Chadwick, who knew that baseball had evolved from English stick-and-ball games. NATIONAL BASEBALL LIBRARY, COOPERSTOWN, NY.

When cited evidence is so laughably insufficient, one must seek motivations other than concern for truth. The key to underlying reasons stands in the first conclusion of Mills’s committee: Hoopla and patriotism (cardboard version) decreed that a national pastime must have an indigenous origin. The idea that baseball had evolved from a wide variety of English stick-and-ball games—although true—did not suit the mythology of a phenomenon that had become so quintessentially American. In fact, Spalding had long been arguing, in an amiable fashion, with Henry Chadwick, another pioneer and entrepreneur of baseball’s early years. Chadwick, born in England, had insisted for years that baseball had developed from the British stick-and-ball game called rounders; Spalding had vociferously advocated a purely American origin, citing the colonial game of “one old cat” as a distant precursor, but holding that baseball itself represented something so new and advanced that a pinpoint of origin—a creation myth—must be sought.

Chadwick considered the matter of no particular importance, arguing (with eminent justice) that an English origin did not “detract one iota from the merit of its now being unquestionably a thoroughly American field sport, and a game too, which is fully adapted to the American character.” (I must say that I have grown quite fond of Mr. Chadwick, who certainly understood evolutionary change and its chief principle that historical origin need not match contemporary function.) Chadwick also viewed the committee’s whitewash as a victory for his side. He labeled the Mills report as “a masterful piece of special pleading which lets my dear old friend Albert [Spalding] escape a bad defeat. The whole matter was a joke between Albert and myself.”

We may accept the psychic need for an indigenous creation myth, but why Abner Doubleday, a man with no recorded tie to the game and who, in the words of Donald Honig, probably “didn’t know a baseball from a kumquat”? I had wondered about this for years, but only ran into the answer serendipitously during a visit to Fort Sumter in the harbor of Charleston, South Carolina. There, an exhibit on the first skirmish of the Civil War points out that Abner Doubleday, as captain of the Union artillery, had personally sighted and given orders for firing the first responsive volley following the initial Confederate attack on the fort. Doubleday later commanded divisions at Antietam and Fredericksburg, became at least a minor hero at Gettysburg, and retired as a brevet major general. In fact, A. G. Mills, head of the commission, had served as part of an honor guard when Doubleday’s body lay in state in New York City, following his death in 1893.

If you have to have an American hero, could anyone be better than the man who fired the first shot (in defense) of the Civil War? Needless to say, this point was not lost on the members of Mills’s committee. Spalding, never one to mince words, wrote to the committee when submitting Graves’s dubious testimony: “It certainly appeals to an American pride to have had the great national game of base ball created and named by a Major General in the United States Army.” Mills then concluded in his report: “Perhaps in the years to come, in view of the hundreds of thousands of people who are devoted to baseball, and the millions who will be, Abner Doubleday’s fame will rest evenly, if not quite as much, upon the fact that he was its inventor…as upon his brilliant and distinguished career as an officer in the Federal Army.”

And so, spurred by a patently false creation myth, the Hall of Fame stands in the most incongruous and inappropriate locale of a charming little town in central New York. Incongruous and inappropriate, but somehow wonderful. Who needs another museum in the cultural maelstroms (and summer doldrums) of New York, Boston, or Washington? Why not a major museum in a beautiful and bucolic setting? And what could be more fitting than the spatial conjunction of two great American origin myths—the Cardiff Giant and the Doubleday Fable? Thus, I too am quite content to treat the myth gently, while honesty requires ’fessing up. The exhibit on Doubleday in the Hall of Fame Museum sets just the right tone in its caption: “In the hearts of those who love baseball, he is remembered as the lad in the pasture where the game was invented. Only cynics would need to know more.” Only in the hearts; not in the minds.

Baseball evolved. Since the evidence is so clear (as epitomized below), we must ask why these facts have been so little appreciated for so long, and why a creation myth like the Doubleday story ever gained a foothold. Two major reasons have conspired: first, the positive block of our attraction to creation stories; second, the negative impediment of unfamiliar sources outside the usual purview of historians. English stick-and-ball games of the nineteenth century can be roughly classified into two categories along social lines. The upper and educated classes played cricket, and the history of this sport is copiously documented because literati write about their own interests and because the activities of men in power are well recorded (and constitute virtually all of history, in the schoolboy version). But the ordinary pastimes of rural and urban working people can be well nigh invisible in conventional sources of explicit commentary. Working people played a different kind of stick-and-ball game, existing in various forms and designated by many names, including “rounders” in western England, “feeder” in London, and “base ball” in southern England. For a large number of reasons, forming the essential difference between cricket and baseball, cricket matches can last up to several days (a batsman, for example, need not run after he hits the ball and need not expose himself to the possibility of being put out every time he makes contact). The leisure time of working people does not come in such generous gobs, and the lower-class stick-and-ball games could not run more than a few hours.

Several years ago, at the Victoria and Albert Museum in London, I learned an important lesson from an excellent exhibit on late nineteenth century history of the British music hall. This is my favorite period (Darwin’s century, after all), and I consider myself tolerably well informed on cultural trends of the time. I can sing any line from any of the Gilbert and Sullivan operas (a largely middle-class entertainment), and I know the general drift of high cultural interests in literature and music. But the music hall provided a whole world of entertainment for millions, a realm with its heroes, its stars, its top-forty songs, its gaudy theaters—and I knew nothing, absolutely nothing, about this world. I felt chagrined, but my ignorance had an explanation beyond personal insensitivity (and the exhibit had been mounted explicitly to counteract the selective invisibility of certain important trends in history). The music hall was a chief entertainment of Victorian working classes, and the history of working people is often invisible in conventional written sources. This history must be rescued and reconstituted from different sorts of data; in this case, from posters, playbills, theater accounts, persistence of some songs in the oral tradition (most were never published as sheet music), recollections of old-timers who knew the person who knew the person….

The early history of baseball—the stick-and-ball game of working people—presents the same problem of conventional invisibility, and the same promise of rescue by exploration of unusual sources. Work continues and intensifies as the history of sport becomes more and more academically respectable, but the broad outlines (and much fascinating detail) are now well established. As the upper classes played a codified and well-documented cricket, working people played a largely unrecorded and much more diversified set of stick-and-ball games ancestral to baseball. Many sources, including primers and boys’ manuals, depict games recognizable as precursors to baseball well into the eighteenth century. Occasional references even spill over into high culture. In Northanger Abbey, written in 1798 or 1799, Jane Austen remarks: “It was not very wonderful that Catherine…should prefer cricket, base ball, riding on horseback, and running about the country, at the age of fourteen, to books.” As this quotation illustrates, the name of the game is no more Doubleday’s than the form of play.

These ancestral styles of baseball came to America with early settlers and were clearly well established by colonial times. But they were driven ever further underground by Puritan proscriptions of sport for adults. They survived largely as children’s games and suffered the double invisibility of location among the poor and the young. But two major reasons brought these games into wider repute and led to a codification of standard forms quite close to modern baseball between the 1820s and the 1850s. First, a set of social reasons, from the decline of Puritanism to increased concern about health and hygiene in crowded cities, made sport an acceptable activity for adults. Second, middle-class and professional people began to take up these early forms of baseball, and this upward social drift inspired teams, leagues, written rules, uniforms, stadiums, guidebooks: in short, all the paraphernalia of conventional history.





A.J. Cartwright, a most interesting point in the continuum of baseball’s evolution. NATIONAL BASEBALL LIBRARY, COOPERSTOWN, NY.

I am not arguing that these early games could be called baseball with a few trivial differences (evolution means substantial change, after all), but only that they stand in a complex lineage, better designated a nexus, from which modern baseball emerged, eventually in a codified and canonical form. In those days before instant communication, every region had its own version, just as every set of outdoor steps in New York City generated a different form of stoopball in my youth, without threatening the basic identity of the game. These games, most commonly called town ball, differed from modern baseball in substantial ways. In the Massachusetts Game, a codification of the late 1850s drawn up by ball players in New England towns, four bases and three strikes identify the genus, but many specifics are strange by modern standards. The bases were made of wooden stakes projecting four feet from the ground. The batter (called the striker) stood between first and fourth base. Sides changed after a single out. One hundred runs (called tallies), not a higher score after a specified number of innings, spelled victory. The field contained no foul lines, and balls hit in any direction were in play. Most important, runners were not tagged out, but rather dismissed by “plugging,” that is, being hit with a thrown ball while running between bases. Consequently, since baseball has never been a game for masochists, balls were soft—little more than rags stuffed into leather covers—and could not be hit far. (Tom Heitz has put together a team of Cooperstown worthies to re-create town ball for interested parties and prospective opponents. Since few other groups are well schooled in this lost art, Tom’s team hasn’t been defeated in ages, if ever. “We are the New York Yankees of town ball,” he told me. His team is called, quite appropriately in general but especially for this essay, the Cardiff Giants.)

Evolution is continual change, but not insensibly gradual transition; in any continuum, some points are always more interesting than others. The conventional nomination for most salient point in this particular continuum goes to Alexander Joy Cartwright, leader of a New York team that started to play in Lower Manhattan, eventually rented some changing rooms and a field in Hoboken (just a quick ferry ride across the Hudson), and finally drew up a set of rules in 1845, later known as the New York Game. Cartwright’s version of town ball is much closer to modern baseball, and many clubs followed his rules—for standardization became ever more vital as the popularity of early baseball grew and opportunity for play between regions increased. In particular, Cartwright introduced two key innovations that shaped the disparate forms of town ball into a semblance of modern baseball. First, he eliminated plugging and introduced tagging in the modern sense; the ball could now be made harder, and hitting for distance became an option. Second, he introduced foul lines, again in the modern sense, as his batter stood at a home plate and had to hit the ball within lines defined from home through first and third bases. The game could now become a spectator sport because areas close to the field but out of action could, for the first time, be set aside for onlookers.

The New York Game may be the highlight of a continuum, but it provides no origin myth for baseball. Cartwright’s rules were followed in various forms of town ball. His New York Game still included many curiosities by modern standards (twenty-one runs, called aces, won the game, and balls caught on one bounce were outs). Moreover, our modern version is an amalgam of the New York Game plus other town-ball traditions, not Cartwright’s baby grown up by itself. Several features of the Massachusetts Game entered the modern version in preference to Cartwright’s rules. Balls had to be caught on the fly in Boston, and pitchers threw overhand, not underhand as in the New York Game (and in professional baseball until the 1880s).

Scientists often lament that so few people understand Darwin and the principles of biological evolution. But the problem goes deeper. Too few people are comfortable with evolutionary modes of explanation in any form. I do not know why we tend to think so fuzzily in this area, but one reason must reside in our social and psychic attraction to creation myths in preference to evolutionary stories—for creation myths, as noted before, identify heroes and sacred places, while evolutionary stories provide no palpable, particular object as a symbol for reverence, worship, or patriotism. Still, we must remember—and an intellectual’s most persistent and nagging responsibility lies in making this simple point over and over again, however noxious and bothersome we render ourselves thereby—that truth and desire, fact and comfort, have no necessary, or even preferred, correlation (so rejoice when they do coincide).

To state the most obvious example in our current political turmoil: Human growth is a continuum, and no creation myth can define an instant for the origin of an individual life. Attempts by anti-abortionists to designate the moment of fertilization as the beginning of personhood make no sense in scientific terms (and also violate a long history of social definitions that traditionally focused on the quickening, or detected movement, of the fetus in the womb). I will admit—indeed, I emphasized as a key argument of this essay—that not all points on a continuum are equal. Fertilization is a more interesting moment than most, but it no more provides a clean definition of origin than the most intriguing moment of baseball’s continuum—Cartwright’s codification of the New York Game—defines the beginning of our national pastime. Baseball evolved and people grow; both are continua without definable points of origin. Probe too far back and you reach absurdity, for you will see Nolan Ryan on the hill when the first ape hit a bird with a stone, or you will define both masturbation and menstruation as murder—and who will then cast the first stone? Look for something in the middle, and you find nothing but continuity—always a meaningful “before,” and always a more modern “after.” (Please note that I am not stating an opinion on the vexatious question of abortion—an ethical issue that can only be decided in ethical terms. I only point out that one side has rooted its case in an argument from science that is not only entirely irrelevant to the proper realm of resolution but also happens to be flat-out false in trying to devise a creation myth within a continuum.)

And besides, why do we prefer creation myths to evolutionary stories? I find all the usual reasons hollow. Yes, heroes and shrines are all very well, but is there not grandeur in the sweep of continuity? Shall we revel in a story for all humanity that may include the sacred ball courts of the Aztecs, and perhaps, for all we know, a group of Homo erectus hitting rocks or skulls with a stick or a femur? Or shall we halt beside the mythical Abner Doubleday, standing behind the tailor’s shop in Cooperstown, and say “behold the man”—thereby violating truth and, perhaps even worse, extinguishing both thought and wonder?





4 | The Panda’s Thumb of Technology




THE BRIEF STORY of Jephthah and his daughter (Judg. 11:30–40) is, to my mind and heart, the saddest of all biblical tragedies. Jephthah makes an intemperate vow, yet all must abide by its consequences. He promises that if God grant him victory in a forthcoming battle, he will sacrifice by fire the first living thing that passes through his gate to greet him upon his return. Expecting (I suppose) a dog or a goat, he returns victorious to find his daughter, and only child, waiting to meet him “with timbrels and with dances.”

Handel’s last oratorio, Jephtha, treats this tale with great power (although his librettist couldn’t bear the weight of the original and gave the story a happy ending, with angelic intervention to spare Jephthah’s daughter at the price of her lifelong chastity). At the end of Part 2, while all still think that the terrible vow must be fulfilled, the chorus sings one of Handel’s wonderful “philosophical” choruses. It begins with a frank account of the tragic circumstance:





How dark, O Lord, are thy decrees!…

No certain bliss, no solid peace,

We mortals know on earth below.



Yet the last two lines, in a curious about-face, proclaim (with magnificent musical solidity as well):





Yet on this maxim still obey:

WHATEVER IS, IS RIGHT



This odd reversal, from frank acknowledgment to unreasonable acceptance, reflects one of the greatest biases (“hopes” I like to call them) that human thought imposes upon a world indifferent to our suffering. Humans are pattern-seeking animals. We must find cause and meaning in all events (quite apart from the probable reality that the universe both doesn’t care much about us and often operates in a random manner). I call this bias “adaptationism”—the notion that everything must fit, must have a purpose, and in the strongest version, must be for the best.

The final line of Handel’s chorus is, of course, a quote from Alexander Pope, the last statement of the first epistle of his Essay on Man, published twenty years before Handel’s oratorio. Pope’s text contains (in heroic couplets to boot) the most striking paean I know to the bias of adaptationism. In my favorite lines, Pope chastises those people who may be unsatisfied with the senses that nature bestowed upon us. We may wish for more acute vision, hearing, or smell, but consider the consequences.





If nature thunder’d in his op’ning ears

And stunn’d him with the music of the spheres

How would he wish that Heav’n had left him still

The whisp’ring zephyr, and the purling rill!



And my favorite couplet, on olfaction:





Or, quick effluvia darting thro’ the brain,

Die of a rose in aromatic pain.



What we have is best for us—whatever is, is right.

By 1859, most educated people were prepared to accept evolution as the reason behind similarities and differences among organisms—thus accounting for Darwin’s rapid conquest of the intellectual world. But they were decidedly not ready to acknowledge the radical implications of Darwin’s proposed mechanism of change, natural selection, thus explaining the brouhaha that the Origin of Species provoked—and still elicits (at least before our courts and school boards).

Darwin’s world is full of “terrible truths,” two in particular. First, when things do fit and make sense (good design of organisms, harmony of ecosystems), they did not arise because the laws of nature entail such order as a primary effect. They are, rather, only epiphenomena, side consequences of the basic causal process at work in natural populations—the purely “selfish” struggle among organisms for personal reproductive success. Second, the complex and curious pathways of history guarantee that most organisms and ecosystems cannot be designed optimally. Indeed, to make an even stronger statement, imperfections are the primary proofs that evolution has occurred, since optimal designs erase all signposts of history.

This principle of imperfection has been a major theme of my essays for several years. I call it the panda principle to honor my favorite example, the panda’s false thumb. Pandas are the herbivorous descendants of carnivorous bears. Their true anatomical thumbs were, long ago during ancestral days of meat eating, irrevocably committed to the limited motion appropriate for this mode of life and universally evolved by mammalian Carnivora. When adaptation to a diet of bamboo required more flexibility in manipulation, pandas could not redesign their thumbs but had to make do with a makeshift substitute—an enlarged radial sesamoid bone of the wrist, the panda’s false thumb. The sesamoid thumb is a clumsy, suboptimal structure, but it works. Pathways of history (commitment of the true thumb to other roles during an irreversible past) impose such jury-rigged solutions upon all creatures. History inheres in the imperfections of living organisms—and thus we know that modern creatures had a different past, converted by evolution to their current state.

We can accept this argument for organisms (we know, after all, about our own appendixes and aching backs). But is the panda principle more pervasive? Is it a general statement about all historical systems? Will it apply, for example, to the products of technology? We might deem this principle irrelevant to the manufactured objects of human ingenuity—and for good reason. After all, constraints of genealogy do not apply to steel, glass, and plastic. The panda cannot shuck its digits (and can only build its future upon an inherited ground plan), but we can abandon gas lamps for electricity and horse carriages for motor cars. Consider, for example, the difference between organic architecture and human buildings. Complex organic structures cannot be reevolved following their loss; no snake will redevelop front legs. But the apostles of post-modern architecture, in reaction to the sterility of so many glass-box buildings of the international style, have juggled together all the classical forms of history in a cascading effort to rediscover the virtues of ornamentation. Thus, Philip Johnson could pl