Six Books on the Environment

John Gray, Straw Dogs

Roy Scranton, Learning to Die in the Anthropocene and We’re Doomed. Now What?

Jedediah Purdy, After Nature

Edward O. Wilson, Half-Earth

Adam Frank, Light of the Stars

Reviewed by Michael F. Duggan

Modern urban-industrial man is given to the raping of anything and everything natural on which he can fasten his talons.  He rapes the sea; he rapes the soil; the natural resources of the earth.  He rapes the atmosphere.  He rapes the future of his own civilization. Instead of living off of nature’s surplus, which he ought to do, he lives off its substance. He would not need to do this were he less numerous, and were he content to live a more simple life.  But he is prepared neither to reduce his numbers nor to lead a simpler and more healthful life.  So he goes on destroying his own environment, like a vast horde of locusts.  And he must be expected, persisting blindly as he does in this depraved process, to put an end to his own existence within the next century.  The years 2000 to 2050 should witness, in fact, the end of the great Western civilization.  The Chinese, more prudent and less spoiled, no less given to over-population but prepared to be more ruthless in the control of its effects, may inherit the ruins.

                        -George Kennan, diary entry, March 21, 1977

No witchcraft, no enemy had silenced the rebirth of new life in this stricken world… The people had done it themselves.

                        -Rachel Carson

We all see what’s happening, we read it in the headlines every day, but seeing isn’t believing and believing isn’t accepting.

-Roy Scranton

Among the multitude of voices on the unfolding environmental crises, there are five that I have found particularly compelling: John Gray, Jedediah Purdy, Roy Scranton, the biologist Edward O. Wilson, and, most recently, the physicist Adam Frank.  This post was originally intended to be a review of Scranton’s newest book, a collection of essays called We’re Doomed. Now What?, but I have decided instead to place that review in the broader context of writing on the environment.

I apologize ahead of time for the length and roughness—the almost complete absence of editing—of this review/essay (the endnotes remain unedited, unformatted, and incomplete).  This is a WORKING DRAFT. The introduction is more or less identical to an article of mine that ran in Counterpunch in December 2018.

Introduction: Climate Change and the Limits of Reason

Is it too late to avoid a global environmental catastrophe?  Does the increasingly worrisome feedback from the planet indicate that something like a chaotic tipping point is already upon us?  Facts and reason are slender reeds relative to entrenched opinions and the human capacity for self-delusion.  I suspect that neither this essay nor others on the topic are likely to change many minds.   

With atmospheric carbon dioxide at its highest levels in three to five million years and no end to its increase in sight, with the warming, rising, and acidification of the world’s oceans, and with the destruction of habitat and the cascading collapse of species and entire ecosystems, some thoughtful people now believe we are near, at, or past a point of no return.  The question may not be whether or not we can turn things around, but rather how much time is left before the environment’s feedback shifts from a stabilizing force into a self-reinforcing loop of catastrophe.  The answer is probably a few years to a decade or two at the outside, if we are not already there.  The mild eleven-thousand-year summer—the Holocene—that permitted and nurtured human civilization and allowed our numbers to grow will likely be done in by our species in the not-too-distant future.

Humankind is a runaway project.  With a world population of more than 7.686 billion, we are a Malthusian plague species.  This is not a condemnation or indictment, nor some kind of ironic boast.  It is an observable fact.  The evidence is now overwhelming that we stand at a crossroads of history and of natural history, of nature and our own nature.  The fact that unfolding catastrophic change is literally in the air is undeniable.  But before we can devise solutions of mitigation, we have to admit that there is a problem.                

In light of the overwhelming corroboration—objective, tested and retested readings of atmospheric CO2 levels, the acidification of the oceans, the global dying-off of the world’s reefs, and the faster-than-anticipated melting of the polar and Greenland icecaps and subsequent rises in mean ocean levels—those who still argue that human-caused global climate change is not real must be regarded frankly as either stupid, cynical, irrational, ideologically deluded, willfully ignorant or distracted, pathologically stubborn, terminally greedy, or otherwise unreasonably wedded to a bad position in the face of demonstrable facts.  There are no other possibilities by which to characterize these people and, in practical terms, the difference between these overlapping categories is either nonexistent or trivial.  If this claim seems rude and in violation of The Elements of Style, then so be it.1  The time for civility and distracting “controversies” and “debates” is over, and I apologize in no way for the tone of this statement.  It benefits nobody to indulge cynical and delusional deniers as the taffrail of the Titanic lifts above the horizon.

Some commentators have equated climate deniers with those who deny the Holocaust and chattel slavery.  Although moral equations are always a tricky business, it is likely that the permanent damage humans are doing to the planet will far exceed that of the Nazis and slavers.  The question is the degree to which those of us who do not deny climate change but who contribute to it are as culpable as these odious historical categories.  Perhaps we are just the enablers—collaborators—the equivalent of those who knew of the crimes and stood by, averting their eyes or else knowingly immersing themselves in the immediate demands and priorities of private life.  No one except for the children, thrown unwittingly into this unfolding catastrophe, is innocent.

The debate about whether human activity has changed the global environment is over in any rational sense.  Human-caused climate change is real.  To deny this is to reveal oneself as being intellectually on the same plane as those who believe that the Earth is the flat center of the universe, or who deny that modern evolutionary theory contains greater and more accurate explanatory content than the archetypal myths of revealed religion and the teleological red herring of “Intelligent Design Theory.”  The remaining questions will be over the myriad unknowable or only partially and imperfectly knowable details of the unfolding chaos of the coming Eremocene (alternatively, Anthropocene)2 and the extent of the changes and consequences, their severity, and whether or not they might still be reversed or mitigated, and how.  The initial question is simply whether or not it is already too late to turn things around.

We have already changed the planet’s atmospheric chemistry to a degree that is possibly irreparable.  In 2012 atmospheric CO2 levels at the North Pole exceeded 400 parts per million (up from a pre-industrial level of around 290 ppm).  At this writing carbon dioxide levels are around 415 ppm.  This is not an opinion, but a measurable fact.  Carbon dioxide levels can be easily tested, even by people who do not believe that human activity is altering the world’s environment.  Even if the production of all human-generated carbon were stopped today, the existing surfeit would last for a hundred thousand years or more if it is not actively mitigated.3  Much of the damage therefore is already done—the conditions for catastrophic change are locked in place—and we are now just waiting for the effects to manifest as carbon levels continue to rise, with only minor plateaus and fluctuations.

Increases in atmospheric carbon levels have resulted in the acidification of the oceans.  This too is an observable and quantifiable fact.  That CO2 absorption by seawater results in acidification, and that atmospheric carbon dioxide traps heat more effectively and to a greater extent than oxygen, are now tenets of elementary-school-level science and are in no way controversial assertions.  If you do not acknowledge both of these facts, then you do not really have an opinion on global climate change or its causes.

As it is, the “climate debate”—polemics over the reality of global climate change—is not a scientific debate at all, but one of politics and political entertainment, pitting testable, measurable observations against the dumb and uninformed denials of true believers or the cynicism of those who profit from carbon generation (the latter are reminiscent of the parable of the man who is paid a small fee to hang himself).4  Some general officers of the United States military are now on the record stating that climate change constitutes the greatest existing threat to our national security.5

Some deniers reply to the facts of climate change with anecdotal observations about the weather—locally colder or snowier-than-usual winters in a given region are a favorite distraction—with no heed given to the bigger picture (never mind that the cold and snowy winters North America has experienced since 2010 were caused by a dip in the jet stream, itself the result of much warmer-than-usual air masses over Eurasia that threw the polar vortex off its axis and down into the lower 48 states while, at times, Greenland basked in 50-degree sunshine).

An effective retort to this kind of bold obtuseness is a simple and well-known analogy: the climate is like your personality and the weather is like your mood.  Just because you are sad for a day or two does not mean that you are clinically depressed, any more than a locally cold winter set in the midst of the two hottest decades ever recorded worldwide represents a global cooling trend.  Some places are likely to cool off as the planet’s overall mean temperature rises (the British Isles may get colder as the Gulf Stream is pushed further south by arctic meltwater).  Of course human-generated carbon is only one prong of the global environmental crisis, and a symptom of existing imbalance.

Human beings are also killing off our fellow species at a rate that will soon surpass that of the Cretaceous die-off, in what is the sixth great mass extinction of the Earth’s natural history.6  This fact is horrifying insofar as it can be quantified at all—the numbers here are softer and more conjectural than the precise measurements of chemistry and temperature, and estimates may well be on the low side.  The true number of lost species will never be known, as unidentified species are driven into extinction before they can be described and catalogued by science.7  But as a general statement, the shocking loss of biodiversity and habitat is uncontroversial in the communities that study such things seriously.  Human history has shown itself to be a brief and destructive branch of natural history in which we have become the locusts, or something much, much worse than such seasonal visitations and imbalances.

As a friend of mine observed, those who persist in their fool’s paradise or in obstinate cynicism for short-term gain, and who still deny the reality of global climate change, must ultimately answer two questions: 1) What evidence would you accept that humans are altering the global environment?  2) What if you are wrong in your denials?

From my own experience, I have found that neither fact-based reason nor the cognitive dissonance it instills changes many minds once they are firmly fixed; rationalization and denial are the twin pillars of human psychology, and it is a common and unfortunate characteristic of our species to double down on mistaken beliefs rather than admit error and address problems forthrightly.  This may be our epitaph.

And now the book reviews.

John Gray: The “Rapacious Primate” and the Era of Solitude

Straw Dogs: Thoughts on Humans and Other Animals, London: Granta, 2002 (paperback 2003), 246 pages.

In the early 2000s, a friend of mine recommended to me some books by the provocative British philosopher and commentator, John Gray.  On issues of human meaning/non-meaning vis-à-vis the amorality of nature, Gray is a two-fisted polemicist from the disillusioned side of post-humanism who loves to mix things up and disabuse people of their moral fictions and illusions.  The present book is not specifically on the world environmental crises, but rather on human nature.

Straw Dogs is a rough-and-tumble polemic—Nietzsche-like in tone and format but Schopenhauer-like in its pessimism—a well-placed barrage against humanism in which the author, painting in broad strokes, characterizes his target as just another delusional faith, a secularized version of Christianity (it is, therefore, not specifically about the environment, although ecological degradation figures into it prominently).  Where Western religion promises eternal salvation, humanism, as characterized by Gray, asserts an equally unfounded faith in terrestrial transcendence: the myths of social progress, freedom of choice, and human exceptionality construct an artificial distinction that “unnaturally” separates humans from the rest of the living world.  Even such austere commentators as Nietzsche (and presumably the existentialists who followed)—far from being nihilists—are in Gray’s appraisal latter-day representatives of the Enlightenment, perhaps even of Christianity in another guise, trying to keep the game of meaning and human exceptionality alive.

Gray begins this book with a flurry of unsettling assertions and observations.  In the preface to the paperback edition, he writes:

“Most people today think that they belong to a species that can be the master of its own destiny.  This is faith, not science.  We do not speak of a time when whales and gorillas will be masters of their destinies. Why then humans?”

In other words, he believes that it is conceit to assume that humans can take charge of their future any more than any other animal can, and that this assumption rests on an erroneous perception that humans are exceptional in kind, set apart from the rest of the natural world.  At the end of this section, he writes:

“Political action has become a surrogate for salvation; but no political project can deliver humanity from its natural condition.” 

Here then is a perspective so conservative, so deterministic, and so fatalistic about workable solutions to the bigger problems of human nature as to dismiss them outright rather than entertain them even as possibilities.  This is not to say that he is necessarily wrong.

But it is really in the first few chapters that Gray brings out the big guns, explaining that not only can we not control our fate, but that we have, through our very success as an animal, become a Juggernaut, a plague species that is inexorably laying waste to much of the living world around us.  Interestingly, he does not lay this at the feet “of global capitalism, industrialization, ‘Western civilization’ or any flaw in human institutions.”  Rather, “[i]t is a consequence of the evolutionary success of an exceptionally rapacious primate.  Throughout all of history and prehistory, human civilization has coincided with ecological destruction.”  We are damned by the undirected natural process that created and shaped our species and are now returning the favor upon nature by destroying the biosphere.

We destroy our environment then because of what we are (presumably industrial modernity is merely an accelerant or the apex manifestation of our identity as destroyer).  We have by our very nature become the locusts, and destruction is part and parcel of who we are rather than a byproduct of a wrong turn somewhere back in our history.  Destruction and eventually self-destruction is in our blood, or more correctly, in the double helix spirals and the four-letter code of our DNA manifested in our extended phenotype.  The selfish gene and self-directed individual coupled with the altruism of group selection form a combination that will likely lead to self-destruction along with the destruction of the world as it was.

With the force of a gifted prosecutor presenting a strong case and with all of the grace and subtlety of the proverbial bull in a china shop, Gray observes that we are killing off other species on a scale that will soon rival the Cretaceous die-off that wiped out the dinosaurs along with so much else of the planet’s flora and fauna 65 million years ago.  He points to early phases of human overkill and notes that most of the megafauna of the last great ice age, animals like the woolly mammoth and rhinoceros, the cave bear, the saber-toothed cats, North American camels, horses, lions, and mastodons (about 75% of all the large animals of North America), and almost every large South American animal—not-so-long-gone creatures that are sometimes anachronistically lumped together with the dinosaurs as pre-human—were likely early casualties of modern human beings and their kin (there was a vestigial population of woolly mammoths living on Wrangel Island until less than 4,000 years ago, or about 1,000 years after the Pyramids of Giza were built).8  Quoting James Lovelock, Gray likens humans to a pathogen, a disease, a tumor, and indeed there is a literal resemblance between the light patterns of human settlement as seen from space and the naturalistic patterns of metastasizing cancer.

Gray concedes “that a few traditional peoples lived in balance with the Earth for long periods,” that “the Inuit and Bushmen stumbled into a way of life in which their footprints were slight.  We cannot tread the Earth so lightly.  Homo rapiens has become too numerous.”  He continues,

“[a] human population of approaching 8 billion can only be maintained by desolating the Earth.  If wild habitat is given over to cultivation and habitation, if rain forests can be turned into a green desert, if genetic engineering enables ever-higher yields to be extorted from the thinning soils—then humans will have created for themselves a new geological era, the Eremozoic, the Era of Solitude, in which little remains on the Earth but themselves and the prosthetic environment that keeps them alive.”

According to Gray, then, wherever humans live on a modern scale (or any scale above the most benign of hunter-gatherers) there will be ecological degradation—there is no way to have recognizable civilization without inflicting harm on the environment.  Similarly, “green” politics and “sustainable” energy initiatives are also pleasant but misleading fictions—self-administered opiates and busywork to assuage progressives and Pollyannas beset with guilty consciences.  To Gray, environmentalism is the sum of delusions masquerading as real solutions and high-mindedness.  It is difficult to tell whether or not he really believes all of what he is saying or if he is just trying to provide a much-needed shaking up of things by making the case clearer than the truth.  Regardless, his position seems to be a development of the grousing adage that, given time and opportunity, people will screw up everything.

Gray’s dystopian future of a global human monoculture, his “green desert” or Eremozoic (“era of solitude”9)  finds parallel expression in what others have called the Anthropocene, or the geological period characterized by the domination of human beings.  Adherents to this concept span a wide range from the very dark to the modestly optimistic to the insufferably arrogant to the insufferably idealistic.

Regardless of which term we use, Gray doesn’t think that things will ever get that far.  Sounding as if he himself were beginning to embrace a historical narrative or metaphysic of his own, he writes that past a certain point, nature (understood as the Earth’s biosphere) will start to push back.  The idea is that the world human population will collapse to sustainable levels, just like an out-of-control worldwide plague of mice, lemmings, or locusts.  Like all plagues, human civilization embodies an imbalance in an otherwise more or less stable equilibrium and is therefore by its nature fundamentally unsustainable and eventually doomed (almost 20 years ago, with a population of about six billion, the human biomass was estimated to be more than 100 times greater than that of any land animal that ever lived10).

There is of course an amoral “bigger picture” implication to all of this—a view of the natural world that, like nature itself, is beyond good and evil—which recognizes that sometimes large changes in natural history, resulting from both gradual change and catastrophe, have in turn resulted in an entirely new phase of life rather than a return to something approximating the previous state of balance.  One example is the rise of photosynthesizing, carbon-trapping, oxygen-producing plants, which took over the world and fundamentally changed the atmospheric chemistry from what had existed before, and therefore the course of life that followed.11  More on this in the discussion below of Adam Frank’s Light of the Stars.

Gray’s thesis appears to have elements of a Malthusian perspective and of the Gaia hypothesis of James Lovelock and Lynn Margulis.  It is unclear how Gray can be so certain of the inevitability of such dire outcomes—that humans lack any kind of moderation and control and that nature will necessarily push back (could humankind’s embracing a greater degree of self-control be the Gaia balancing mechanism?).  Such certainty seems to go beyond a simple extrapolation of numbers and the subsequent acknowledgment of likely outcomes, into an actual deterministic historical narrative—an untestable metaphysical assertion and therefore a myth along the lines of what he takes to task in this book, and the sort of eschatology he criticizes in his excellent 2007 book Black Mass.  My sense is that Gray will likely be right; my objection is that he indulges in non-scientific beliefs, something of which he accuses others.

As a theory, then, I believe that the flaw in Gray’s thesis lies in its Gaia counter-mythology and deterministic inevitability, its necessity, its fatalism, when in fact we do not know whether the universe (or the biosphere as a subset) is deterministic or indeterministic.  We may very well kill off much of the natural world and ourselves with it, but this may have less to do with evolutionary programming or biological determinism than with inaction or bad or ineffective decisions in regard to the unprecedented problems that face us.  I also realize that if we fail, this will be the ultimate moot point in all of human history.

The Gaia hypothesis may turn out to be a true theory—perhaps nature will protect itself like a creature’s immune system by eradicating a majority of what William C. Bullitt called “a skin disease of the earth.”12  The problem is that this theory—really an organon or meta-theory—purports to describe a phenomenon that cannot be tested (although the extinction or near-extinction of humankind would certainly corroborate it).  It is therefore not a scientific theory.  This in no way means that it is necessarily a worthless idea or untrue.  It is simply not science, and as with the humanist belief in human exceptionality, it is taken on faith.13

Let me clarify the previous paragraph: if the Gaia hypothesis maintains that the Earth’s biosphere is self-regulating (e.g., maintaining atmospheric oxygen levels at a steady state, resisting the tendency of a non-living system toward chemical equilibrium), then this is a theory that can be accounted for by physics (e.g., James Lovelock’s “Daisyworld” thought experiment) and is not teleology or metaphysics (see Adam Frank, Light of the Stars, 129).  If we hypothesize that there are elements of the biosphere that will act like a creature’s immune system in eradicating the surplus human population, then we have likely ventured into the realm of metaphysics.
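For readers unfamiliar with Daisyworld, the point can be made concrete in a few lines of code.  What follows is a minimal sketch of the Watson and Lovelock model (the parameter values, the Euler time-stepping, and the function names are my own illustrative assumptions, not anything drawn from Gray or Frank), showing how a purely physical feedback, with dark daisies warming the planet and light daisies cooling it, regulates temperature without any teleology:

```python
# Minimal Daisyworld sketch (after Watson & Lovelock, 1983).
# All parameter values below are illustrative assumptions.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 917.0           # stellar flux at the planet (W m^-2)
ALBEDO = {"ground": 0.50, "white": 0.75, "black": 0.25}
Q = 2.06e9           # heat-transfer parameter (K^4)
GAMMA = 0.3          # daisy death rate
T_OPT = 295.5        # optimal growth temperature (K)

def growth_rate(T_local):
    """Parabolic growth response, zero outside roughly 278-313 K."""
    beta = 1.0 - 0.003265 * (T_OPT - T_local) ** 2
    return max(beta, 0.0)

def step(a_white, a_black, luminosity, dt=0.05):
    """One Euler step of the daisy cover fractions at a given stellar luminosity."""
    a_ground = 1.0 - a_white - a_black
    planet_albedo = (a_ground * ALBEDO["ground"]
                     + a_white * ALBEDO["white"]
                     + a_black * ALBEDO["black"])
    # Planetary (emission) temperature from radiative balance.
    T_e4 = S0 * luminosity * (1.0 - planet_albedo) / SIGMA
    # Local temperatures: lighter patches run cooler, darker patches warmer.
    T_white = (Q * (planet_albedo - ALBEDO["white"]) + T_e4) ** 0.25
    T_black = (Q * (planet_albedo - ALBEDO["black"]) + T_e4) ** 0.25
    # Logistic-style competition for bare ground; a small floor keeps a seed population.
    a_white += dt * a_white * (a_ground * growth_rate(T_white) - GAMMA)
    a_black += dt * a_black * (a_ground * growth_rate(T_black) - GAMMA)
    return max(a_white, 0.01), max(a_black, 0.01), T_e4 ** 0.25

if __name__ == "__main__":
    a_w, a_b = 0.01, 0.01
    for lum in [0.7 + 0.05 * i for i in range(15)]:   # a slowly brightening star
        for _ in range(400):                           # let cover fractions settle
            a_w, a_b, T_planet = step(a_w, a_b, lum)
        print(f"L={lum:.2f}  white={a_w:.2f}  black={a_b:.2f}  T={T_planet:5.1f} K")
```

Over a wide range of stellar brightness the mix of daisies shifts so that the planetary temperature stays near the growth optimum, and when the forcing becomes too strong the regulation simply fails; that is the sense in which such feedback is ordinary physics and population dynamics rather than planetary purpose.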

It stands to reason that, as a practical matter, any successful, intelligent, willful animal that can eradicate its enemies and competitors and alter its environment (both intentionally and unintentionally) will run afoul of nature, as the biologist Edward O. Wilson suggests.  But is this a tenet of common sense?  Logical necessity?  Biological or physical determinism?  And as a small subset of nature, is it even possible for us to know what necessity is for nature?  Are we condemned to extinction by a lack of ability to adapt to changes increasingly of our own making, arising from our own nature, or is our extinction made inevitable by a surfeit of adaptability and successful reproduction (i.e., the very qualities that allowed us to succeed)?  Is balance possible in such a species?  What of balance and of creatures whose numbers have been held in sustainable check, in a steady state, for tens and in some cases hundreds of millions of years in relatively stable morphological form—the shark, crocodile, and dragonfly—creatures that live long enough to diversify slightly or change gradually along with conditions in the environment?  What of animals that have improved their odds (cats and dogs come to mind) through intelligence and a mutually beneficial partnership and co-evolution with humankind?

Gray says that we cannot control our fate, and yet our very success, and perhaps our downfall, is the result of being able to control so much of our environment (the elimination of natural enemies, from animal competitors to endemic diseases, and the regulation of human activity and production to guarantee water, food, energy, etc.).  Any animal that can eliminate or neutralize the counterbalances to its own numbers will create imbalance, and unchecked imbalance leads to tipping points.14  It is ironic that Gray lays all of this at the feet of the human species as the inevitable product of our animal nature, as the result of biological and even moral inevitability, and yet there is a tone of judgment about it all, as if we are somehow to blame for who we are, for characteristics that Gray believes are endemic and unalterable.

Gray, then, is a bleak post-humanist—an apostate conservative gone rogue—who apparently adheres to humanist values in his own life (indeed, as Camus knew, a view espousing a void of deontological values must lead either to humanism or to nihilism, and nobody lives on a basis of nihilism).  In an interview with Deborah Orr that appeared in the Independent, he states that “[w]e’re not facing our problems.  We’ve got Prozac politics”—an odd claim given the supposed inevitability of those problems and the impossibility of fixing them, and an odd statement for a behavioral determinist.  Moreover, although he powerfully criticizes the proposed solutions of others, his own solutions are vague and unlikely to remedy the situation (not that that is their purpose).15  When he writes on topics outside of his areas of expertise (artificial consciousness, for instance), his ideas are not especially convincing.16

Of course in a literal biological sense Gray is right about a lot: humans are just another animal, and to assert otherwise is to create an “artificial” distinction.  But even here, the demarcation between the organic and the artificial or synthetic (meaning the product of the human extended phenotype—itself a “natural” category) has to be further defined, and it remains a useful distinction (“altered,” “manmade,” or “human-modified nature” may be more constructive, if inelegant, refinements of the “artificial” or “unnatural”).  After all, are domesticated animals “natural,” are feral animals “wild” in conventional usage, and does calling everything “natural” add clarity to finer delineations?

Gray frames his discussion as an either/or dichotomy of the utopian illusion of progress versus inevitable apocalyptic collapse.  But what if the truth of the matter is not as cut-and-dried as he would have us believe?  Perhaps we cannot be masters of our fate in an ultimate sense, but can we manage existing problems and new ones as they arise, even from past solutions?  We have done so in more modest instances in the past, but here the devil lies in both the scale and the details, and the details may include a series of insurmountable hobbles and obstacles.

We may not be far off from Roy Scranton’s prescription of acknowledging defeat, and personal decisions about learning to die in a global hospice, but we are not there yet.  The chances of redeeming the situation may be one in 100 or one in 1,000, but there is still a chance.  As a glorified simian—a “super monkey” in the words of Oliver Wendell Holmes, Jr.17 (the flipside of Gray’s Homo rapiens)—we are audacious creatures who must take that one chance, even if it turns out to be founded on delusions.  “If not gorillas and whales,” Gray asks, “why then humans?”  Because we are natural-born problem solvers; because gorillas and whales have never put one of their own on the Moon.  Why humans?  Because the New Deal, the industrial mobilization during the Second World War, the Manhattan Project, the Marshall Plan, and the Apollo Moon Project are parts of the historical record and not matters of faith.

Far from seeing human civilization in terms of enlightened progress, we must come to regard it as ongoing damage control: putting out fires as they spring up and then managing the spinoff problems that emerge from previous solutions—mitigating rather than just adapting or surrendering.  This will involve an unending series of brutal choices and a complete reorientation of the human relationship with nature, whose only appeal will be that they are preferable to our own extinction and to inflicting irreparable damage on the world of which we are a part.

If Gray is simply making a non-deterministic Malthusian case that, unaltered, human population growth will likely result in a catastrophic collapse, we could accept this as a plausible and perhaps even a very likely hypothesis.  If on the other hand he is saying that the Earth is itself a living/conscious or “spiritual” being and will necessarily push back against human metastasis through a sort of conscious awareness or physical law-like behavior, then he is showing susceptibility to a kind of historical narrative of his own.

What, then, is the practical distinction between the deterministic inevitability of Gray’s (Lovelock’s and Margulis’s) Gaia model and the practical inevitability of a Malthusian model?  (Malthus himself hints at something very much like the Gaia thesis: he refers to famine as “the most dreadful resource of nature… The vices of mankind are active and able ministers of depopulation.  They are the precursors in the great army of destruction, and often finish the dreadful work themselves.  But should they fail in this war of extermination, sickly seasons, epidemics, pestilence, and plague, advance in terrible array, and sweep off their thousands and tens of thousands.  Should success still be incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow, levels the population with the food of the world” [Malthus, p. 61].)  The answer is that the latter is inevitable only if the conditions leading toward a collapse remain unaltered, and it therefore allows for the possibility of a workable solution where the inevitable model does not.  As that greatest of Malthusian antagonists-turned-protagonist from English literature, Ebenezer Scrooge, in all of his Dickensian wordiness, duns the Ghost of Christmas Yet to Come:

“Spirit, answer me one question: are these the shadows of things that will be or the shadows of things that may be only?  Men’s actions determine certain ends if they persist in them.  But if their actions change, the ends change too.  Say it is so with what you show me… Why show me this if I am past all hope?”18 

In the words of another English writer also given to overwriting, “aye, there’s the rub.”  Perhaps it is not too late for humankind to change its ways, and to regard writers on the environment as latter-day analogs of the Ghosts of Christmas Present and Future.  It should be noted that under Malthus, there are survivors once the excess is eliminated.19

If Gray is right, some have argued that we might as well keep on polluting and degrading the environment, given that destruction flows from unalterable human nature and therefore self-extermination is inevitable.  Tiny Tim will go to an early grave no matter what changes and accommodations Scrooge makes in a closed universe.20  As Gray himself writes, “[p]olitical action has come to be a surrogate for salvation; but no political project can deliver humanity from its natural condition.”  Bah Humbug.

Of course whether the impending collapse of world civilization is deterministically certain or only certain in a practical or probabilistic sense is ultimately irrelevant, given that either way it will likely come to pass.  The question here is whether we will catastrophically implode as just another plague species, or if we are able to manage a controlled decline in population to a sustainable steady state (and do the same with carbon even earlier).  It is the difference between an uncontrolled world of our own making and one in which we shape events piecemeal, through suitable incremental goals, toward reaching a steady state.  It is the difference between a slight chance and no chance at all.

Although I am not sold on the idea that biology is destiny—even though we can never untether ourselves from nature or our own nature, we can perhaps rise above our brute character with moderation and reason—I do agree that past a certain point, if we kill off the natural world, we will have killed ourselves in the process.  There will never be a human “post-natural” world.

One could argue that the audacity, hubris, and capacity for innovation that allowed us to take over the world are value-neutral qualities that could be reoriented toward curbing our own success.  One wonders what value Gray credits to human consciousness and to human ideas, other than an admission that science and technology (notably medical and dental) progress.  One senses that he sees our species as not worthy rather than as tragic.

Darwinian success may lead to Malthusian catastrophe, just as a human apocalypse could mean salvation for the rest of the living world.  The over-success of the human species is the result of natural drives to survive, to improve our situation, and to eliminate the competition (as well as of an excellent blueprint—our genes—and of our nature, which is divided between the individual and the group; see E.O. Wilson, The Meaning of Human Existence).  More specifically, if it is these powerful tools that served us so well in making us the biological success we have become—and if survival is the conscious or unconscious goal of animals—then it is an artificial distinction to claim that we could not curtail this success with the same tools.

In the interest of full disclosure, I must say that I don’t share Gray’s categorical contempt for humanism or the Enlightenment.  His own ideas stand on the shoulders of, or in proximity to, these ideas and trends, and would not otherwise exist without them.  As a friend of mine observed, if we think of the natural world as a living organism (as Gray might), then, by way of analogy, human beings might be regarded as the most advanced, most conscious neurons of the brain of the creature.  The fact that we have become a runaway project does not make us bad (even if we accept Gray’s premise that humans destroy nature because of who we are, we can hardly be blamed for being who we are).  The fact that brain cells sometimes mutate into brain cancer hardly makes brain cells bad.21

One problem with writing about nature is that the living world is like a great Rorschach test onto which we project our beliefs and philosophy à la mode, reading them into our observations and the lessons drawn from it.  Emerson and Thoreau are mystics of a new-agey pantheism “as it exists in 1842.”  Malthus is a conservative economist and moralist wedged between the Enlightenment he helped to kill and the naturalism and modernity he helped usher in.  Darwin is a reluctant naturalist keenly aware of the importance of his great idea but shy of controversy and invective.  In Pilgrim at Tinker Creek, Annie Dillard is a perceptive and precociously odd woman-child who likes bugs and is endowed with a poet’s genius for the written word, reporting what she sees with such brute honesty that she overwhelms herself.22  Gray fluctuates from neo-Hobbesian realist to Gaia fatalist to Schopenhauer-like pessimist.

To be fair, Straw Dogs is probably not Gray’s best book (see Black Mass, for instance).  In the end, there is something a little facile, a little shallow about the swagger, the pose he strikes here—the professional doom-and-gloomer on a soap box to frighten the fancy folk out of their smug orthodoxy.  Although there are few things more dangerous than a true believer, one comes away from Gray wondering if he believes all of his own ideas.  This is not to say that there are not powerful ideas here or that they are wrong.   

Roy Scranton and Nietzsche’s Hospice (or: How to Live and Die Well in a Dying World)

Learning to Die in the Anthropocene, City Lights Books, 2015, 142 pages.                                                 

“Well, when the fall is all that is left, it matters a great deal.”

            -From: The Lion in Winter

“Gather ye rosebuds while ye may…”

            -Robert Herrick

One of the more eloquent voices to emerge on the darkly realistic side of the Anthropocene perspective in recent years is Roy Scranton.  A literal poet warrior who has glimpsed the ruined future of humankind in the rubble and misery of Iraq, Scranton believes that it is simply too late to save the environment.  The time for redemption has passed.  Full stop. His Nietzsche-like response is one of acceptance, that as members of a mythmaking species, people should acknowledge that we are finished and learn to die with courage and dignity in the Anthropocene.

Although he might be the first to deny it, having seen a reasonable analog or foreshadowing of the coming apocalypse as an enlisted combat infantryman in Iraq, Scranton has “street cred” that gives his dismissal of the pipe dreams of benevolent globalization (and perhaps of any hope of a workable solution at all) a kind of gravitas often missing in the writing of his colleagues in the ivory tower.  We should always take ideas on the basis of their validity, realizing that experience is only relevant insofar as it informs the soundness of our judgment, views, and interpretations.  In this sense, Scranton writes with a sense of firsthand authority and disillusioned realism missing in the analysis of other writers with more limited worldly experience.23  The book’s greatest strengths are the quality of the writing and the author’s honesty.

In some respects, Scranton goes farther than other dark realists like Gray, asserting that things are already too far gone as a matter of fact, and that all that remains is to learn to die well with the Apollonian sense of calm and circumspection prescribed by Nietzsche.  Scranton is a noble, disillusioned bon vivant of the mind, forced by circumstances and his own clear and unflinching perception into fatalistic stoicism.  Unlike Gray, he embraces the myths, or rather the mythmaking, that mark us as human.  He also does not put himself above the human condition with all of its warts.

In his elegant, if grimly poetic little book—essentially a long essay—Learning to Die in the Anthropocene, Scranton acknowledges the existence of the neoliberal Anthropocene, recognizing its necessarily terminal nature (in this sense, he is similar to writers like Robert D. Kaplan, who accepts neoliberal economic globalization as a fact, but has few illusions about its implications).24

Scranton is not as elemental or pugnacious a polemicist as Gray, and his claim is not necessarily deterministic in character (i.e., that the looming end is the result of cosmic or genetic destiny or of the natural balancing of the biosphere).  He simply observes that things are too far gone to be reversed.  For all of his insight, he does not advance grandiose theories about human nature; he just looks at the world around him—peers Nietzsche-like into the abyss—and does not blink.  Honest, sensitive, and intelligent—quite possibly a genius—he simply tells the truth as he sees it.  He accepts the inevitable without illusion or delusion.  The time for redemption has passed, and we must learn to die with whatever gives us meaning.

As with Gray, Scranton may prove to be right as a practical matter, though he believes the end to be a matter of empirical fact rather than the unfolding of biological, historical, or metaphysical necessity.  His book is nonetheless palliative in tone.  Scranton’s effective batting down of any and all optimistic possibilities reminded me of the story of General Grant whittling on a stick until nothing remained but a pile of shavings.

Scranton’s book has an affinity with Camus’s novel The Plague and Cormac McCarthy’s The Road in facing questions of how to live well in a time with little or no hope.  It might also find inspiration or answers in Nietzsche’s early essay “Schopenhauer as Educator” in his underrated collection, Untimely Meditations.  Here Nietzsche makes the case for embracing those characteristics that set humans apart: the qualities and activities of the artist, the saint, and the philosopher.  Unlike Gray, Scranton embraces the Nietzschean idea of meaning arising from those characteristics that make us human.

But if Scranton’s thesis finds a parallel elsewhere in ethics, it is in the personal end-of-life issues and the idea (or ideal) of dying with dignity that we all must face if we are not taken sooner.  It is a macro version of these inevitable discussions—an intimate issue made universal and then reflected back upon the individual as a thinking being and complicit element of a dying world.

As a tenet of thoughtful maturity, it is wise to consider and even follow his prescription as individuals regardless of whether or not one believes that Scranton’s dark realism goes too far and that there is still time to mitigate or reverse the effects of global climate change, overpopulation, and loss of biodiversity.  Sensible people draw up wills and trusts.  Many of us already seek comfort in science, philosophy, history (ideas generally), art, and nature in times of personal crisis or as distractions in a time increasingly characterized by troubling news and in hours where there is little reason for hope. 

But even so, we must take care in the present moment to make sure that this premise cannot become a rationalization for permanent escape or a distraction from solutions as well as the problems that face us, lest we and the solutions fall victim to premature surrender.  The danger is that Scranton’s palliative prescription could provide a basis for terminal escapism (something that Americans seem to be perfectly capable of without his help), allowing the less thoughtful to take over without opposition. 

Regardless of how we seek to address the crises of the environment, Scranton’s Learning to Die in the Anthropocene thesis is a thing to be kept in the back of the mind, in a similar sense that end-of-life issues should be tended to in one’s own life.  If he has only written an open letter to those able to countenance his prescription and stark acknowledgment of the end—as other prophets have done during the darker moments of world history—then it is well conceived and useful as a personal outlook or philosophy.  Its utility is as a fallback position in the likelihood that things do not work out—as it increasingly appears they will not.

In this sense he is more like a fifth-century Irish monk carefully preserving civilization at the edge of the world, on the precipice of a possible end, than an Old Testament prophet speaking of eventual dawn after the dark of night, the calm after the tempest.  As with the early Irish monks and similar clerical scribes writing at the height of the Black Death of the fourteenth century, we do not know whether or not we face the end of the world.25  The difference is that Scranton believes in facing the facts with frankness and honesty, and then falling back on myths about ourselves and about the meaning of life.

One problem with this idea is that it is difficult to imagine a prescription of a global hospice as a basis for an actual policy position, unless the end were imminent and undeniable.  Even then it would not be constructive.  So long as there was any chance of hope, such an outlook would likely do more harm than good as anything but a personal prescription for those capable of aspiring to it (like those found in the works of Nietzsche).  This is aggravated by the fact that our faltering republic has become an arena for vicious, high-stakes blood sports and interest politics, and one can imagine that in a time and place where it is officially announced that the end is close at hand, the most aggressive at all levels of society would simply take what remains with no regard for others or the future (similar criticisms have been made by others about Gray’s outlook).

The world today suffers from a state of affairs in which more is taken out of the planet than it is able to replenish.  It is estimated that the human population passed the point of global sustainability around 1978, and that as of 2002, human needs exceeded the world’s carrying capacity by 1.4 times.  A world population of 8 billion would require the resources of four planet Earths to sustain it.26  Similarly, more waste is returned to the environment than it can absorb.  If governments around the world announced that the end was near, what would stop people from even more selfish use of the remaining resources, and what could realistically deter them from trying?
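For what it is worth, the arithmetic behind such overshoot estimates is simple to reproduce in outline.  The sketch below shows only the form of the calculation (total demand on the biosphere divided by its regenerative capacity); the per-capita footprint and biocapacity figures are illustrative placeholders of my own, not the data behind the estimates cited above:

```python
# Back-of-the-envelope sketch of an ecological overshoot ratio.
# The figures below are illustrative placeholders, not the data behind
# the estimates cited in the text.

GLOBAL_BIOCAPACITY_GHA = 12.0e9  # assumed regenerative capacity, in global hectares

def overshoot_ratio(population, footprint_per_capita_gha):
    """Total human demand divided by what the planet can regenerate; >1 means overshoot."""
    return population * footprint_per_capita_gha / GLOBAL_BIOCAPACITY_GHA

# e.g., 6.2 billion people each using a hypothetical 2.7 global hectares
print(overshoot_ratio(6.2e9, 2.7))  # roughly 1.4, i.e., the demand of ~1.4 Earths
```

The larger “four Earths” figure cited above presumably reflects different assumptions about per-capita demand or methodology rather than population growth alone.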

At a certain point, when the denial of the climate disaster becomes untenable, the deniers will pivot to a perspective of “it’s too late” while staking out claims on the world’s depleting resources.  In this sense, the “dying with dignity” thesis would play into the hands of these people and would become as counterproductive as Gray’s thesis of biological determinism.  One does not have to be a prophet to see that the result would be the opposite of a productive approach.  While it is reasonable to take both men at their word, it is also possible that they are attempting to jolt people out of complacency.  I am sure that both would welcome being proved wrong in their darkest predictions, and if this is their true motive, then I think they are on to something—the fear they inspire may well be their most important contribution.

End-of-life issues are by their nature among the most difficult, the most personal.  Many of us start adulthood as Dionysian in temperament—as romantics—standing against fate and defying augury with the confidence of youth.  But coming down from the peak in midlife, we become Apollonian—stoical, perhaps even Buddhist-like—as we begin to relinquish things back to the universe.  But this is a personal, partly rational, partly intuitive or pre-rational choice—one has to be ready to let go according to one’s temperament—and it seems doubtful that it could be taught whole cloth to an increasingly diverse nation, much less the world.  People face their own mortality in very different ways.  Letting go requires quiet, not chaos.  Scranton’s book, like those of Nietzsche (and Hemingway, Camus, and Gray), is for individuals and is not a basis for policy.

Scranton’s diagnosis and prescription are probably not intended to dissuade people from positive action, but the danger, in a practical sense, is that they will dissuade people from embracing the possibility of hope.  His position is likely a kind of resignation after frustration, where the frustration lingers along with the subsequent calm.  To paraphrase Edna Ferber’s comparison of marriage to drowning: life in a dying world may not be an altogether unpleasant experience once one has given up struggling.

The idea Scranton embraces in Learning to Die in the Anthropocene and in his hard-hitting 2015 essay in the New York Times, “We’re Doomed. Now What?,” is that we turn inward as natural-born mythologizers and find meaning there.  But ultimately myths are the fictional or embellished narratives of a hunter-gatherer species that help bind individuals to the group—lies we tell ourselves in order to reveal greater symbolic truths.  We are myth-makers, but we are equally natural-born problem solvers, and the danger inherent in Scranton’s view is the possibility of accentuating “man the myth-maker” over “man the result-oriented philosopher.”

His observation that “our human drive to make meaning is powerful enough to turn nihilism against itself” can be plausibly paraphrased as “self-delusion can overpower our perceptions of reality and the fact that there is no objective or deontological ‘meaning of life’” or “human self-delusion is a part of who we are and is more powerful than our glimpses of dark reality.”  Assuming he is right, why couldn’t we turn the same human focus and drive—Will, to continue with the Nietzschean theme—against that which now threatens us: our own animal nature and excess?  Perhaps the answer is to be found in a question we might ask about the protagonist in Thomas Hardy’s novel Jude the Obscure: is Jude Fawley a genuine tragic hero who might have otherwise succeeded, or just a guy who doesn’t know when he is beaten? 

Humans may be the only animals that lie to themselves—we are born myth-makers, and it has served us well in a practical sense as hunter-gatherers and in later groups, whether they were communities of the faithful or the fans of sports teams.  Certainly it is a valuable characteristic as a primary font for the arts: we lie in a literal sense in order to reveal deeper, if less literal, truths.  But to immerse ourselves in finding meaning in a world devoid of objective moral meaning would seem to be self-indulgent, solipsism on a grand scale.  If anything, the preoccupation with entertainment, art, trendy pseudo self-awareness, and other distractions is a part of the problem.  Why embrace Quixotic quests for meaning if it is too late?  Is such meaning anything more than an opiate for a dying patient?

How can we myth-make—how can we lie to ourselves through art and archetype—as the world dies at our hands, and we along with it?  What is “meaning” in a dying world, and how is it to be used constructively, and for what end?  To date, avoidance and immersion in diversions have been parts of the problem.  Is Scranton’s prescription what people will do anyway, the last full measure of delusion?  I do not want to put words in his mouth, and others—like E.O. Wilson and Adam Frank—believe that we need powerful new myths in order to save us.

Of course even without hope there are good reasons to act with dignity in the face of inevitable demise.  This of course is a key tenet of the Hemingway worldview: that in a world without intrinsic meaning, we can still come away with something if we face our fate with courage and dignity.  Nietzsche’s prescription is even better: if we are to live our lives in an eternal sequence of cycles, then we should attempt to conduct our lives in such a way as to make them monuments to ourselves, for eternity.  We do this by living in such a way as would best reflect our noble nature.  Although modern physics has obviously cast doubt on the idea of eternal recurrence, the idea holds up equally well in the block universe of Einstein (and Parmenides and Augustine), in which the past and future exist forever as a continuum in spite of the “stubbornly persistent illusion” of the present moment.  Our lives are our eternal monuments between brackets, even in a dying world, and although Nietzsche and Einstein were both determinists, we must act as if we have choice.

Camus believes that in a world without deontological values, we assert our own and then try to live up to them knowing that we will fail.  A.J. Ayer inverts this with the idea that life provides its own meaning in a similar sense that our tastes choose us more than we choose them.  If Ayer is right, then perhaps we arrive back at determinism: we have no choice but to immerse ourselves in personal myths as they select us.  We have a will, but it is a part of who we are, and who we are is given.

Of course one could ask how we are to affirm what makes us distinctively human in a positive sense when that which characterizes us as a plague species continues to strangle the biosphere.  What is meaning in a dying world, intellectual or otherwise?  Do we withdraw into our myths and archetypes as natural-born mythmakers, or has this been a part of the problem all along?

To this I would only add what might be called “The Parable of the Dying Beetle.”  When I was a child, I came across a beetle on the sidewalk that had been partially crushed when someone stepped on it.  It was still alive but dying.  I found a berry on a nearby bush and put it in front of the beetle’s mandibles and it began to eat the fruit.  There may have been no decision—eating something sweet and at hand was presumably something the beetle did as a matter of course.  It made no difference that there was no point in a dying beetle nourishing itself any more than did my offering it the berry to begin with.  It was simply something that the beetle did.  Perhaps it is the same with humans and myth-making: it is what we do, living or dying.

The Inner Worlds and Outer Abyss of Roy Scranton

We’re Doomed. Now What?, New York: Soho Press, Inc., 2018.

Scranton’s long-awaited new book is a collection of essays, articles, reviews, and editorials.  It begins with a beefed-up version of his New York Times editorial “We’re Doomed. Now What?” [https://opinionator.blogs.nytimes.com/2015/12/21/were-doomed-now-what/]—which distills some of the themes of his earlier book, Learning to Die in the Anthropocene.  The new book is organized into four sections.  The first is on the unfolding climate catastrophe.  The second is on his experiences of the war, followed by “Violence and Communion” and “Last Thoughts.”  Given that Scranton’s most conspicuous importance is as a writer—as a clear-sighted prophet of the environment—this arrangement makes sense, even though his vision of the future comes from his experience as a combat infantryman.

When Scranton limits himself to his own observations and experiences, he is powerful, poetic—the Jeremiah of his generation and possibly the last Cassandra of the Holocene, the world as it was.  He is a writer of true genius and a master storyteller of startling eloquence who writes multilayered prose with finesse and grace.  If there is any flaw, it may be a slight tendency toward overwriting, but this is an insignificant aesthetic consideration.  He also tends to assert more than reveal, but then he is not a novelist.   

When he listens to his own muse or discusses other first-person commentators on war, he is magnificent.  When he references the great philosophers, he is earnest but perhaps slightly didactic, his interpretations more conventional.  When he references recent philosophers, especially postmodernists like Derrida, Foucault, and Heidegger, he is only slightly more tolerable than anybody else dropping these names and their shocking ideas (one can only hope that he has read some of Chomsky’s works on scientific language theory as well as his ideas on the environment, but I digress).  I also take issue with some of his interpretations of Nietzsche, but these are the quibbles of a philosophy minor, and the book is mostly outstanding and should be read.

His writing on war is insightful both taken on its own and, chronologically, as a preface to his writing on the environment.  He is not only a keen observer who knows of what he speaks, he is completely fluent in the corpus of war literature drawn from experience.  If Scranton turns out to be wrong about the terminal nature of the environmental crises, his writing on war will likely endure as an important contribution to the canon in its own right.  In my library, his book will alternate between shelf space dedicated to the environment and somewhere in the neighborhood that includes Robert Graves, Wilfred Owen, Siegfried Sassoon, Ernst Jünger, Vera Brittain, Eugene Sledge, and Paul Fussell.  The essays on war are reason enough to buy the book.  Certainly every Neocon, every Vulcan or humanitarian interventionist whose first solution to geopolitical problems in important regions of the developing world is to drop bombs or send other people’s children into harm’s way should read all of Scranton’s war essays.

There is perhaps one substantial point of contention I have with this book, and I am still not sure how to resolve it, whether to reject my own criticism or to embrace it.  Scranton begins this collection with his powerful “We’re Doomed. Now What?” but ends it with an essay, “Raising a Daughter in a Ruined World,” that appeared in the New York Times around the same time that the new book was released during the summer of 2018.  Regardless of whether or not one agrees with its thesis, there is an uncompromising purity of vision in the earlier book and most of the essays of the new one.  

In the last essay of the present book, Scranton writes with his characteristic power, insight, and impossibly good prose.  But then he seems to pull a punch at the end.  Sure, we’re screwed and there is little reason for hope, but the nature of the doomsday scenario is a little less clear in the last essay: does the near future hold the extinction of our species along with so many others, or is it just some kind of transformation?  Is the world merely ruined or about to be destroyed?  To be fair, nobody knows how bad things will be beyond the tipping point.  If he begins the book with a knockout hook, he seems to end it with a feint that, while not exactly optimism, is something less than certain death—a vague investment in hope with real consequences.

I get it: kids force compromises and force hope along with worry, and his intellectual compromise (tap dance?) may be that there is a glimmer of hope.  Even though the abyss looks into you when you look into it, most of us would blink at least once, even in a world that may (or may not) be dying.

He rightfully asks “[w]hy would anyone choose to bring new life into this world?” and then spends part of the essay rationalizing an answer that is very much in keeping with the theme of the myths of personal meaning he prescribes in Learning to Die in the Anthropocene.  Kids force hope, but who forced, or at least permitted, the child’s existence to begin with?  It is none of my business, except that Scranton is a public commentator who brought up the point publicly.  The problem is that the new creature did not ask to be a part of someone’s palliative prescription.  For while there are many shades of realism, one cannot be half a fatalist any more than one can be half a utopian.  Or as a friend of mine observed, “[T]he problem with taking responsibility for bringing a child into the world is that it precludes rational pessimism.”

The more general problem is that this acknowledgment of possible hope forces him from the uncompromising position of doom of his earlier book and most of his articles in the new one to a somewhat more conventional and less interesting Anthropocene position—one that admits that the world is ruined (i.e. too far gone to be saved through robust mitigation), and that, rather than try to reverse the damage, we must adapt.  In reviewing his previous book, I noted that a fatalistic point of view risks premature surrender, but here my criticism is more with his newfound rationale for solutions than with his all-too-human flinch per se.

Learning to Die in the Anthropocene gives us a basis for a personal approach to the world’s end; in “Raising a Child in a Doomed World” [https://www.nytimes.com/2018/07/16/opinion/climate-change-parenting.html], Scranton states that individual solutions—other than suicide on a mass scale (although one can only wonder what kind of greenhouse gases billions of decomposing corpses would produce)—cannot be a part of the solution in terms of fixing the problem.  Even with the possibility of premature surrender, the earlier, more personalized perspective is more interesting than the new one with its non-forthcoming large-scale prescriptions.  He throws out a few of the solutions common to the young (global bottom-up egalitarianism, global socialism), but has no illusions about the feasibility of these.

Even here there is honesty: he does not pretend to know how to fix things.  And so (during an August 8, 2018 reading and book signing at Politics & Prose in Washington, D.C.), he lapses into generalities when questioned: “organize locally and aggressively,” perhaps there will be a world socialist revolution (which he openly concedes is utopian, the realm of “fantasy,” yet at another point states that it “now seems possible”), do less and slow down (although in the last essay, he states that personal approaches can’t work), and learn to die (getting back to his previous theme).  

A couple of other minor points: the book’s title seems a bit too stark and spot-on for such a serious collection and is more in keeping with the placard of the archetypal street-corner prophet of New Yorker cartoons.  Similarly, the cover illustration—the Midtown Manhattan skyline awash behind an angry sea—struck me as being a little tabloidesque, but what is it they say about judging a book by its cover?

Jedediah Purdy and the Democratic Anthropocene

After Nature: A Politics for the Anthropocene, Harvard University Press, 2015, 326 pages.

Another of the most articulate voices under the umbrella of Anthropocene perspectives is Jedediah Purdy, now a professor of law at Columbia University Law School after 15 years at Duke.  Purdy is a prolific writer and this book—now a few years old—is by no means his most recent statement on the environment (for an example of his more recent writing, see https://www.nytimes.com/2019/02/14/opinion/green-new-deal-ocasio-cortez-.html).

After Nature is a wonder and a curiosity.  In the first six chapters he provides an intellectual history of nature and the American mind that is nothing short of brilliant.  His writing and effortless erudition are exceptional.  He is a truly impressive scholar.  This part of his book is intellectual history at its best. 

Purdy’s approach is to use the law as a reflection of attitudes toward the natural world.  Through a legal-political lens, he devises the successive historical-intellectual categories of the providential, romantic, utilitarian, and ecological, interpreting nature as the wilderness/the garden, pantheistic god, natural resources, and a living life support system to be tamed, admired, worshiped, managed, and preserved. 

These interpretive frames in turn characterize or “define an era of political action and lawmaking that left its mark on the vast landscapes.”  On page 27, he states that these visions are both successive and cumulative, that “[t]hey all coexist in those landscapes, in political constituencies, and laws, and in the factious identities of environmental politics and everyday life.”  He acknowledges that all of these perspectives exist in his own sensibilities.  In my experience, one is unlikely to come across better fluency, depth of understanding, and quality of writing on this topic anywhere, and one is tempted to call it a masterwork of its kind.

It is therefore all the more surprising that after such penetrating analysis, historical insight, and eloquence in describing trends of the past, his prescription for addressing the environmental problems of the present and future would go so hard off the rails into a tangle of unclear writing and a morass of generalities and unrealistic remedies.  It also strikes one as odd that such a powerful and liberal-minded commentator would embrace his particular spin on the Anthropocene perspective, given some of its implications.

In Chapter 7, “Environmental Law in the Anthropocene,” Purdy introduces some interesting, if not completely original, ideas like “uncanniness”—the interface with other sentient animals without ever knowing the mystery of what lies behind it, of what they feel and think.  Before this, he discusses something he calls the “environmental imagination”—an amalgam of power (“material”) interests and values.  After this he ventures into more problematic territory in his sub-chapter “Climate Change: From Failure to New Standards of Success.”

Purdy rejects the claims of unnamed others that climate change can be “solved” or “prevented” (these are his cautionary quotation marks, although it is unclear whom he is quoting).  He writes about the “implicit ideas” of unidentified “scholars and commentators” (my quotation marks around his ideas) and their “predictable response” of geo-engineering to rapidly mounting atmospheric carbon levels (“a catch-all term for technologies that do not reduce emissions but instead directly adjust global warming”).  Again, I am not sure to whom he is referring here.  Most people I know who follow environmental issues favor a variety of approaches, including the reduction of carbon emissions.

According to Purdy, this perspective begins with “pessimism” and the observation that “we are rationally incapable of collective self-restraint.”  This is reasonable enough, and Purdy recognizes that spontaneous self-restraint on a global scale has not been forthcoming.  Indeed it is hard to imagine how such collective action would manifest itself on such a massive scale short of a conspicuous crisis of a magnitude that would likely signal the catastrophic end of things as we know them (e.g. if we woke up one day and most of the coastal cities of the world were under a foot of water).  If this kind of awareness of a crisis were possible at a point where it was not too late to mitigate the crisis, it could only be harnessed through the top-down efforts of states acting in concert.

With self-restraint not materializing, the “pessimism” of the environmental straw man switches to “hubris.”  And both of these descriptive nouns then “take comfort” (just like actual people or groups of people in a debate) in an either/or conclusion “that if we fail to ‘prevent’ climate change or ‘save’ the planet from it then all bets are off; we have failed, the game is up.”  This threat of failure and apocalypse then results in the “next step”: the ‘try anything now!’ attitude of geo-engineering.

From here he concludes that “[b]oth attitudes manage to avoid the thought [idea] that collective self-restraint should be a part of our response, perhaps including refraining from geo-engineering: the pessimism avoids that thought by demonstrating, or assuming, that self-restraint would be irrational and therefore must be impossible; and the hubris avoids it by announcing that self-restraint has failed (as it had to fail ‘rationally’ speaking), it was unnecessary all along anyway.”

Purdy then “propose[s] a different way of looking at it” and calmly announces that “climate change, so far, has outrun the human capacity for self-restraint” [so, the attitude of “hubris” is right then?], that it is too late to save nature as it was (“climate change has begun to overwhelm the very idea that there is a ‘nature’ to be preserved”), and that we should learn to adapt.  In the next paragraph, he states, “[w]e need new standards for shaping, managing, and living well.  Familiar standards of environmental failure will not reliably serve anymore” [does he mean metrics of temperature, atmospheric and ocean chemistry, and loss of habitat/biodiversity?].  “We should ask, of efforts to address climate change, not just whether they are likely to ‘succeed’ at solving the problem, but whether they are promising experiments—workable approaches to valuing a world that we have everywhere changed.”

For a moment then, there is a glimmer that Purdy might be on to something by embracing a Popper-like outlook of experimentation and piecemeal problem solving/engineering.  The question is how to implement an approach of bold experimentation. 

My own view is that, on balance, the environmentalists of recent decades have been clear-sighted in their observations and that their “pessimism” is warranted.  As with Malthus and the inexorable tables of population growth, I would contend that they are right except perhaps for their timetable.  Is the dying-off of the world’s reefs and the collapse of amphibian and now insect populations all just the pessimism and hubris of fatalistic imaginings?

How then should we proceed?  Even with the implosion of the End of History narrative, Purdy, like so many of his generation and the younger Millennials, seems to have a child’s faith in the curative powers of democracy.  His concurrence with Nobel laureate Amartya Sen’s famous observation that famine has never visited a democracy appears to be as much of an uncritical Fukuyama-esque cliché as the assertion that democracies do not fight each other (malnutrition on an impressive scale has in fact occurred in Bangladesh and in the Indian states of Orissa and Rajasthan—i.e. regions within a democratic system).

Purdy then asserts a kind of democratic or good globalization in contrast to the predatory, neoliberal variety that he rightfully identifies as a leading accelerant of the global environmental catastrophe.  He writes that “[p]olitics will determine the shape of the Anthropocene.”  Perhaps, but what does “democracy” mean to the millions living on trash heaps in the poorer nations of the world?  What does it mean in places like Burma, the Congo, and Libya?

A savant of intellectual history, Purdy seems to know everything about the law and political history as a reflection of American sensibilities.  But politics and the law (like economics and the military) are avenues and manifestations of power—even when generous and high-minded, the law is about power—and one is left wondering if Purdy knows how power really works.  

In the tradition of Karl Popper’s The Open Society and Its Enemies, I would contend that the primary benefits of democracy (meaning the representative democracy of the better liberal republics) are practical, almost consequentialist in nature, rather than moral.  First, it is an effective means of removing bad or ineffective leaders and a means of promoting “reform without violence”;27 second, it should ideally provide a choice in which a voter can discern a clearly preferable option given their interests, outlook, and understanding.

The idea of a benevolent democratic genre of globalization and a “democratic Anthropocene” is reminiscent of academic Marxians of a few decades ago who waited for the “real” or “true” Marxism to kick in somewhere in the world while either shrugging off its real-world manifestations in the Soviet Union and the Eastern Bloc, China, Cuba, and North Korea as false examples, corrupt excrescences, or else acknowledging them as hijacked monstrosities.

Whether in support of Marxism or democracy, this kind of ideological stance allows those who wield such arguments to immunize or insulate their position from criticism rather than constructively welcoming it, inviting it.  It could be argued that the concept of egalitarian democratic or socialistic globalization is to the current generation what Marxist socialism was to American idealists of a century ago.  In the early twentieth century, the majority of Americans had the realism and good sense not to accept the eschatological vision and prescriptions of the earlier trend.  As numerous writers have noted, populism is just as likely to take on a reactive character as it is a high-minded progressive one.  As economist Robert Kuttner and others have observed, some of the European nations whose elections were won by populist candidates can be described as “illiberal democracies.” [See Robert Kuttner, Can Democracy Survive Global Capitalism?, 267].

The fact that some of the most brilliant young commentators on the environment, like Purdy and perhaps Scranton (even with his admission that global socialism is possibly utopian)—to say nothing of veteran commentators on the political scene, like Chris Hedges (America: The Farewell Tour)—embrace such shocking unrealism leaves one with a sense of despair over the proposed solutions as great as that over the crises themselves.  It is like pulling a ripcord after jumping out of an aircraft only to find that one’s parachute has been replaced with laundry.

To be fair, nobody has a solution.  Edward O. Wilson has lamented that humans have not evolved to the point where we can see the people of the world as a single community.  Even such a world-historical intellect as Albert Einstein advocated a single world government. [See Albert Einstein, “Atomic War or Peace,” in Out of My Later Years, 185-199].  If the proliferation of nuclear weapons and the possibility of the violent destruction of the world could not force global unity, what chance do the environmental crises have?  As George Kennan observed, the world will never be ruled by a single regime (even the possibility that it will be ruled entirely under one kind of system seems highly unlikely).  Unfortunately, he will probably be proven right.

Purdy rightfully despises the neoliberal Anthropocene wrought by economic globalization.  But perhaps this is the true nature of globalization: aggressive, expansionistic, greed-driven, blind to or uncaring of its own excesses, and de facto imperialistic in character.  William T. Sherman famously observed that “[w]ar is cruelty, and you cannot refine it.”28  So it is with globalization, whether it be mercantilist, imperialist, neoliberal, or some untested new variety.

Globalization is economic imperialism and it likely cannot be reformed.  The whole point of off-shoring industry and labor arbitrage is to make as big a profit as possible by spending as little money as possible in countries with no tax burdens, few, if any, labor and environmental laws, and people willing to work for almost nothing.  Globalization is the exploitation of new markets to minimize costs and maximize profits.  While the purpose of an economy under a social democratic model is to provide as much employment as possible, neoliberal globalization seeks a system of efficiency that streamlines the upward flow of wealth from the wage slaves to the one percent.

It is conceivable that someday in the distant future the world will fall into an interlinked global order based on naturalistic economic production regions and import-shifting cities, as described by Jane Jacobs.  But that day, if it ever comes, is both far off and increasingly unlikely, and there exists no roadmap of how to get there.29  Certainly a sustainable, steady-state world would have to be more-or-less egalitarian as a part of fundamentally re-conceptualizing the human relationship with nature.  But this too is a long way down the road and would have to be imposed by changing circumstances forced by the environment.  We need solutions now, and the clock is ticking.

For the short term—for the initial steps in a long journey—the best we can hope for is modest and tenuous cooperation among sovereign states to address the big issues facing us: a shotgun marriage forced by circumstances, by intolerable alternatives (an historical analogy might be the U.S.-Soviet alliance in the Second World War, and the effort will have to be like a World War II mobilization, only on a vastly larger scale).  We will need states to enforce change locally, and international agreements will have to establish what the laws will be.  The problem here is the internal social and political divisions within states that are unlikely to be resolved.  Moreover, immediate local interests will always take priority over what will likely be seen as abstract worldwide issues.  In order to prevent such internal dissent and tribalism, and building on Jacobs’ idea, an ideal world order would have to consist of small regional states that are demographically homogeneous (another idea of David Isenbergh).

Purdy rightfully disdains the disparities of neoliberal globalization but only offers an ill-defined program in which “the fates of rich and poor, rulers and ruled” would be tied (presumably the ruling classes would allow the ruled to vote away their power).  The idea here is that famine is not the result of scarcity but rather of distribution.  If such control and reconfiguration is already possible, then why has it failed to date?  If it is possible in the near future, then why stop there?  Why not banish war and bring forth a workers’ paradise?  Why not take the measures to save the planet prescribed by the environmental “pessimists”?  Why not Edward O. Wilson’s Half-Planet goal (see below)?

As regards the practicalities of democratic globalization, Purdy’s prescriptions also seem to ignore some inconvenient historical facts.  For instance, as many commentators have noted, the larger and more diverse a population becomes, the less governable it becomes, and certainly the less democratic, as individual identities and rights are subordinated to the group.  The idea of a progressive social democracy with a very large and diverse population seems unlikely to the point of being a nonstarter.30

Democracy works best on a local level where people are intimately acquainted with the issues and how they affect their interests—the New England town hall meeting being the archetype for local democracy in this country.  Similarly, the most successful democratic nations have tended to be small countries with small and homogenous populations.  Trying to generalize this model to a burgeoning and increasingly desperate world any time soon is a pipedream.

Ultimately, the problem with the prescription of universal democracy in a technical sense is that democracies, like economies, are naturalistic historical features and are not a-contextual constructs to be cut out and laid down like carpet where and when they are needed.  Democracy must grow from within a cultural/historical framework.  It cannot effectively be imposed any more than can a healthy economy.  As Justice Holmes observed in a letter to Harold Laski, “[o]ne can change institutions by a fiat but populations only by slow degrees and I don’t believe in millennia.”

Purdy also seems to conflate democracy with an ethos of liberalism.  Democracy is a form of government by majority rule, where liberalism is an outlook based on certain sensibilities.  If a fundamentalist Islamic nation gives its people the franchise—or if a majority of people in an established republic adopt an ideology of far-right populism—they will likely vote for candidates who espouse their own values and interests, however illiberal.  Transplanted world democracy and the redistribution of wealth are not likely to work even if the means to implement them existed.

As for the democratic Anthropocene—or any kind of Anthropocene world order—I think that John Gray gets it mostly right: things will never get that far.  In order to understand the impracticality of this idea, we might consider a simple thought experiment in which we substitute another animal for ourselves.  It is difficult to imagine a living world reduced to a monoculture of a single species of ant or termite, for instance.31  And while humans, like ants (e.g. leafcutters), may utilize various resources of a robust environment of which they are but a small subset, it is difficult to imagine nature surviving as a self-supporting system in a reduced state as the symbiotic garden (Gray’s “green desert”) along the periphery of an ant monoculture.  And so we ask: if not ants, then why humans?

In terms of Boolean logic, the reduction of nature to a kept garden—and I am not saying that Purdy goes this far—appears to be an attempt to put a larger category (nature) inside of a smaller one (human civilization), the equivalent of attempting to draw a Venn diagram with a larger circle inside of a smaller one.

Beyond the lack of realism there is also an unrealized immorality to the more extreme Anthropocene points of view.  Letting nature and the possibility of its salvation be lost is a kind of abdication that is not only monumentally arrogant but also ethically monstrous, and on a scale far greater than historical categories like slavery or even the worst instances of genocide.  One can only wonder if adherents to the Anthropocene perspectives realize the implications of their prescriptions.

We now know that the living world is far more conscious, thinking, feeling, and interconnected than we ever before suspected.32  Even the individual cells of our bodies appear to possess a Lamarckian-like interactive intelligence of their own, and we can only begin to guess at the complexities of the overlapping systems of the world biosphere.33  There is no possible way we can know the implications of the lost interrelation of whole strata and ecosystems.  To think that we can manage a vastly reduced portion of the living world to suit our needs is as unethical as it is impractical.

To give up and say that the world is already wrecked is not the same thing as saying that some abstract or hypothetical set or singular category will be lost, but rather that a large part of the sentient world will be destroyed by us.  To put it more bluntly, how can allowing nature to be destroyed—meaning the extinction of perhaps a million or more species and trillions of individual organisms—without attempting the largest possible effort to prevent it, be any less of an atrocity than the Holocaust or slavery?  In an objective biological calculus of biodiversity, it will be many fold worse, even if the ecological declines occur over a period of lulling gradualism, of terraces of change and plateaus, and human adaptation.  A child who has never seen a snowy winter day, a snowy egret, or a snow leopard will not miss them any more than a child today misses a Carolina parakeet or a Labrador duck.  At worst they will experience a vague sadness for something they never knew, assuming they are even taught about such lost things.

I mention this (and again, I am not saying that Purdy advocates such a position) because I would like to think that those who subscribe to the Anthropocene perspectives would have willingly fought in WWII, especially if they had been aware of the atrocities of the Nazis and Imperial Japanese.  And yet in a mere two sentences, the author seems to decree an unspecified portion of the living and sentient world to be permanently lost:

“As greenhouse-gas levels rise and the earth’s systems shift, climate change has begun to overwhelm the idea that there is a “nature” to be saved or preserved.  If success means keeping things as they are, we have already failed, probably irrevocably.”

No “nature” to be preserved?  What could this possibly mean?  Could the author mean it literally, that the living world (to include humans) is lost?  Could he mean “nature” as metaphor (whatever that means)?  As a defunct concept or “construct” of the kind that postmodernists love to contend is half of a false dichotomy?  Are environments like rainforests and reefs metaphors and human constructs?  Since this is a work of nonfiction, I will take him at his literal word, but readily concede that I might be misunderstanding this and other points of his.

And the solution:

“We need new standards for shaping, managing, and living well in a transformed world.”

“Living well,” huh?  What could this mean in a world soon to have 8 billion mouths to feed (Scranton, by contrast, tells us that we must learn to die well)?  How is this not anthropocentrism?  Observe the logic here: when the alternatives are likely failure and unlikely success, don’t even try to correct the problem or fix your style of play; simply change the standards and hope for the best.  Move the goalposts to suit the game you intend to play.  When reality becomes unacceptable, just diminish your expectations and change the parameters of the discussion.  When the Wehrmacht overruns Poland, France, and the Low Countries, just write off these areas as newly acquired German provinces and then do business with the new overlords.  After all, solutions have not been forthcoming to date.  He is right that things look bleak for the world, but then things looked pretty bleak in 1939 and 1940.

My sense is that beyond the brilliance and kindly nature, there is a kind of desperation rather than resignation in this outlook.  In his book Purdy asserts the stern banality that “nature will never love us as we love it,” as if that were somehow related to the issue, as if to chastise naïve tree huggers with the fact that their embrace is unrequited.  But one gets the sense that he might just as easily be chiding a younger, Thoreau-like Jed Purdy over a lost love that never loved him back.  If an intelligent realization of the amorality of nature has forced him to relinquish the mistaken idea of a beloved and loving nature, perhaps he cannot let go of the universalist ideals of liberal democracy, even above the survival of much of the natural world itself.  A person must believe in something, and it is easier to accept the death of something that never loved us in return.  If we do not hold on to something, what then remains of belief, of youthful optimism, and of hope for the future beyond youth?


What Purdy offers is a liberal humanist “riposte” to the undeniable biological logic of the posthumanists like Gray and liberals who would extend rights to the non-human world.  Purdy brilliantly attempts to preserve liberal humanism, a wholesome human tie to the land, and the dignity (if not actual rights) of animals.   

As intellectual history, After Nature is impressive, and besides minor infractions against the language no more serious than a modest penchant for words like “paradigmatic,” much of it is remarkably well written.  But ultimately the importance of a book is found in the power of its ideas—its insights—rather than in the power of its presentation.  For all of its brilliance, After Nature ends up embracing hopeful speculative generalities that one may infer are intended to be superior and ahead of the pack, while seeming to write off much of the living world.  In his prescriptions he is provincial in his generational ideas—ideas full of historical analysis but shorn of real historically-based policy judgment, ideas which by his own admission will not preserve nature, which he deems a defunct concept and reality.

A great analyst may fail as a practical policy planner, and the stark contrast of this book as legal and political history relative to its prescriptions suggests that this is the case.  Just because you are smart doesn’t mean you are sensible in every case, and just because you write well doesn’t mean you are right.  Great eloquence runs the risk of self-seduction along with the seduction of others; many legal cases are won by the persuasion of presentation rather than by the proximity of the claims of the winning argument to the truth of the matter.  Purdy clearly knows history, but in my opinion, he does not apply his remarkable interpretation of the past toward a realistic end.  As with some lawyers-turned-historians I have known, he seems to overestimate the power and influence of the law and political form (it was not the Confiscation Acts nor, strictly speaking, the Emancipation Proclamation that destroyed slavery, but rather the Union Army; where the law is not enforced, the law ceases to exist as a practical matter), to include those of “democracy,” on the course of human events.

Purdy does not face the human fate that Scranton characterizes in Learning to Die in the Anthropocene.  This is understandable.  What is standalone brilliance and ambition in a dying world?  If Scranton is sensitive and intelligent, Purdy is too, perhaps even more so, and he has not seen Iraq.

The Grand Old Man of Biology and His Half-Earth

Half-Earth: Our Planet’s Fight for Life, New York: W.W. Norton & Company, 2016, 259 pages

The human species is, in a word, an environmental hazard.  It is possible that intelligence in the wrong kind of species was foreordained to be a fatal combination for the biosphere.  Perhaps a law of evolution is that intelligence extinguishes itself.

-Edward O. Wilson

This admittedly dour scenario is based on what can be termed the juggernaut theory of human nature, which holds that people are programmed by their genetic heritage to be so selfish that a sense of global responsibility will come too late.

-Edward O. Wilson

Darwin’s dice have rolled poorly for Earth.

-Edward O. Wilson

In contrast to the three authors I have discussed so far, Edward O. Wilson is an actual scientist.  As one might expect, he is non-judgmental but equally damning in his measured observations of the devastation wrought by our kind.34  He is genial and understanding of human flaws, fears, and the will to believe, but retains few illusions, and in some ways his analysis is as dire as Gray’s (Wilson coined the term Eremozoic/Eremocene, the “Era of Solitude”—which he prefers to Anthropocene).35  Unlike the others, Wilson tells us what we must do to save the planet.  He does not tell us how.

What sets him apart from the others is that he is a world-class biologist, the world authority on ants, and one of the founders of modern sociobiology.  He is intimately acquainted with the problem and has an understanding of how natural systems work that is both broad and deep.  As regards his writing, he is gentle—a good sport by temperament—and has sympathy with people and the human condition with all of its quirks and many faults.  It is striking that this gentleness does not diminish or water down his observations.

Wilson has written a great deal—9 books while over the age of 80—and has apparently changed his mind on some important issues over the years.  He believes that humans cannot act beyond the natural imperatives that shaped us as creatures, but he does believe that we can learn and change our minds.  It is therefore noteworthy and not a little ironic that John Gray believes that our behavior is inevitable, yet one senses a tone of judgment, while Wilson believes that we may have a choice in what we are doing, and yet is forgiving, even sympathetically coaxing.

In his 2016 book, Half-Earth, Wilson offers as a solution—a goal rather than a means of achieving it—a thesis with the same hemispheric name: in order to save the biosphere, it is necessary to preserve as much of the world’s biodiversity as possible.  To do this, he believes that we must preserve half of the world’s land surface as undisturbed, self-directed habitat.

In a book note in the March 6, 2016 edition of The New Republic titled “A Wild Way to Save the Planet” [https://newrepublic.com/article/130791/wild-way-save-planet], Professor Purdy reviewed Wilson’s book with some prescience and little charity.  Purdy raises some interesting points and is correct that Wilson does not offer a practical step-by-step program or a roadmap toward this goal.  He is also right that Wilson is not at his best when speculating on the natural adaptive purpose of the free market or on population projections and that he demonstrates a certain political naïveté, but then his importance is not as a social engineer or a practitioner of practical politics. He is a leader of the biodiversity movement and a foundation dedicated to this bears his name.  He is also a Cassandra with the most impressive of credentials relative to his topic.  In terms of contributions and historical reputation, Wilson, who will be 90 next month (June 2019), is the most distinguished of the five commentators discussed here.

In his analysis, and after a grudging if mostly accurate overview of Wilson’s positions and accomplishments,36 Purdy seems to miss the significance of Wilson’s book as a poetic (as opposed to purely analytical) thesis: if we want to save the planet and ourselves, we must preserve the world’s biodiversity and the unfathomable complexity of symbiosis and interconnection of the living world.  If we want to save Nature thus construed, we must dedicate about half of the planet to just leaving it alone (indeed, a plausible argument can be made that, other than setting aside wild areas, the more humankind meddles with nature—even with good intentions—the more harm we do).

Although the niches of individual species lost may be quickly filled in an otherwise rich environment, we cannot begin to imagine the implications of the structural damage we do to the overall ecosphere through the wholesale destruction of habitat and species.  There may be impossibly complex, butterfly-effect-like ripples leading to unforeseen ends.  Damage to the environment is often disproportionate to what we think it might be.37  Nor should we concede that the natural world is hopelessly lost already (in stating that “[i]f success means keeping things as they are, we have already failed, probably irrevocably,” Purdy reveals himself to be darker than the “pessimists” who still seek mitigation), or that the goal of some writers on the Anthropocene may be little more than managing what remains of nature.  In contrast, Wilson is not making a “wild” suggestion.  He is telling us what we must do to save the biosphere and ourselves with it.  In this assessment I believe he is correct.

Wilson sees the Anthropocene outlooks and their monocultural goal as pernicious anthropocentrism—a Trojan horse of human arrogance cloaked in the language of stern environmental realism.  He believes that they prescribe a greatly reduced human-nature symbiosis with humans as the senior partner.  Purdy dismisses this in a few clipped assertions, with a confidence that underlies so much of his analyses here and elsewhere.  But Wilson’s experience with both the Nature Conservancy and the academy, and statements by the people he cites, bear out his beliefs (to be fair, there are degrees of the Anthropocene perspective, ranging from the comparatively mild to the extreme).

Regardless, Purdy does not speak for all Anthropocene points of view—more extreme adherents do in fact couch their positions in terms of a stark and dismissive pseudo-realism that is arrogant.  Purdy seems to concede the danger of “a naturalized version of post-natural human mastery” in his book (pp. 45-46).  As for the prescriptions of the Anthropocene perspective Wilson criticizes in Half-Earth, it would seem that they are no more realistic than those of a cancer patient who acknowledges his disease but not its lethality, or else realizes its seriousness and then adopts a cure that will allow the disease to kill him.  Purdy asserts that Wilson’s goal is itself a reflection of just another Anthropocene outlook.

Does Wilson’s book posit an Anthropocene thesis?  Adherents to the Anthropocene define it variously as the state of affairs in which nature has been irreparably damaged or altered by the activities of mankind, and in which we, as the dominant species, are thrust into the position of dealing with it one way or another.

Purdy characterizes the Anthropocene as a current that “is marked by increased human interference and diminished human control, all at once, setting free or amplifying destructive forces that put us in the position of destructive apprentices without a master sorcerer.  In this respect, the Anthropocene is not exactly an achievement; it is more nearly a condition that has fallen clattering around our heads.”38 

This is fair enough.  But what concerns our analysis of Wilson’s outlook is not so much the acknowledgement of the Anthropocene as a fact or a state of affairs (or the term we use to describe it) as whether or not his view is an Anthropocene perspective like the ones he criticizes in Half-Earth, and with which Purdy at least in part concurs in After Nature (i.e. one that has accepted the ruin of the biosphere and which prescribes adaptation over mitigation).

Lawyers quibble over definitions far more than do scientists.  The sides of a good faith critical discussion should agree on terms and proceed from there. Although I find questions over definitions to be inherently uninteresting and unimportant distractions, since Purdy makes the claim that Wilson’s Half-Earth thesis is an Anthropocene argument by another name, we might briefly examine if it is.39  

Is the Half-Earth hypothesis an Anthropocene argument?  I think the answer is “no.”  First of all, Wilson admits that the problem is real, that the biomass of the human species is more than 100 times that of any other large animal species that ever lived.  But he also believes that the vast majority of species that comprise the current biodiversity of the world can still be preserved (i.e. the Eremocene/Anthropocene is where we are heading, but we are not yet there in any final sense).  This can be done by preserving half of the planet as habitat.  This is not a prescription for a human monoculture with a diminished natural periphery or greenbelt, but the opposite: an accommodation of the natural world as a thing apart from us, a steady-state, hands-off stewardship while curbing our own excesses.  It is mitigation.

My sense is that Wilson’s perspective of the natural world as a “self-willed and self-directed” prior category that is deserving of our protection as remote stewards capable of protection or destruction is sound.  The biggest part of this protection would be simply leaving it alone, rather than treating it as a subset to be managed—an adjunct category—or a thing permanently wrecked to be tolerated, and adapted to (as we adapt it to us) insofar as it meets or does not interfere with our needs.40

But even if Wilson’s admission of the human impact on the biosphere and a set of policies to preserve half of it technically render his argument an Anthropocene perspective, there is still a substantive difference: the difference between attempting to manage nature and leaving a large portion of it alone.  It is the difference between adapting to (and cultivating) the unfolding wreckage and mitigating it through noninterference.

In this sense, Wilson’s Half-Earth is not so much an Anthropocene thesis as it is an attempt to preclude a human monoculture by setting aside half of the planet through a policy of noninterference rather than involved management.  Taking him at his word, I am inclined to say that Wilson seeks to avoid the Eremocene by preserving diversity, rather than adopting an Anthropocene perspective that declares nature to be dead and aspires to somehow live well among the wreckage.

Purdy correctly writes off Wilson’s view of economic growth as “[a] naturalized logic of history” and calls it “technocratic” (“technocrat”/“technocratic”/“technocracy” are variations of a favorite smear among the post-Boomer generations, although the word appears to have multiple related but different definitions, one being “a specialized public servant.”  I wonder if they would lump the men and women who implemented the New Deal, the U.S. industrial mobilization during WWII, and the Marshall Plan into this category).  When reading the review I got the feeling that Wilson’s powerful sociobiological arguments rankle against Purdy’s strong attraction to democratic theory and related philosophy based on human exceptionality.

Ironically, Purdy admonishes the author for providing no blueprint for implementing the half-planet model, yet offers nothing stronger than generalities about global democracy.  He also writes, “[a]lthough Wilson aims for the vantage of the universe—who else today calls a book The Meaning of Human Existence?—the strengths and limitations of his standpoint [are] those of a mind formed in the twentieth century.”  One could just as reasonably ask: who else today calls a book After Nature, regardless of whether “nature” is intended as metaphor, an outdated concept or construct, the living world and physical universe as things-in-themselves, or all of the above?

Likewise the bit about “the mind formed in the twentieth-century” suggests a tone of generational chauvinism, a latter-day echo of “[t]he torch has been passed to a new generation…” perhaps.  He dismisses Wilson’s love of nature and his general outlook as parochial to the twentieth-century United States—an odd claim to make against the world authority on ants, the man who coined the term biodiversity, the standard-bearer of sociobiology, and a person who was bitten by a rattlesnake as a youth.

The larger implication of Purdy’s dismissal of Wilson as a well-meaning but ultimately avuncular old provincial is itself a kind of local snobbery and presentism—the apparent assumption that anyone from an older generation is insufficiently evolved or sophisticated in his thinking to embrace the eschatological utopian clichés and bubbles of a later generation (Purdy was born in 1974, and was therefore no less a product of the twentieth century than is Wilson).  As such, Wilson is treated as a representative of just another misled perspective to be weighed against cutting-edge sensibilities, found wanting, and waved away in spite of a modestly good effort at the end of an impressive career.

I would venture that Wilson knows both nature and history better than Purdy in terms of experience—he lived through the Great Depression, which was also the period of the regional ecological disaster called the Dust Bowl, and was a teenager during the Second World War.  These are hardly events likely to instill an excessively benevolent or uncritical view of nature or human nature.  Purdy may be right about the devastation wrought by neoliberal globalization, but I believe he is wrong about Wilson and his goal.  Both men concede the necessity of reconfiguring the human relationship with the planet.  Wilson calls for a “New Enlightenment” and a sensibility he calls “biophilia” [regarding the latter, see The Future of Life, 134-141, 200].  Purdy dismisses Wilson’s feelings toward nature as just more unrequited love.  And yet Wilson’s biophilia does not seem incongruent with Purdy’s own “new sensibilities.”

When reading Purdy’s review of Wilson’s book, I was reminded of a story about an earlier legal prodigy, Oliver Wendell Holmes, Jr., who, as a senior at Harvard, presented his Uncle Waldo with an essay criticizing Plato.  Emerson’s taciturn reply: “I have read your piece.  When you strike at a king, you must kill him.”41  In spite of some good observations about weak points in Wilson’s outlook (especially in areas outside of his expertise), Purdy’s review didn’t lay a glove on the great scientist or his general prescription.

Where Purdy is right is in the failure of human self-restraint to materialize on a scale to save the planet.  Decades of dire warnings from environmentalists have failed to arouse the world to action.  It seems unlikely that Wilson’s prescription will be any more successful.  What is required is drastic, top-down action by the nations of the world.  I will discuss this in a later post.

My reading of Wilson is that the Half-Earth goal is what needs to be done in order to save the world’s biodiversity to include humankind as a small categorical subset.  He leaves the messy and inconvenient details to others.  Wilson and his idea are very much alive and if we wish to remain so, we must take it to heart.  As a person schooled in realism, I have long believed that if necessary measures are rendered impracticable under the existing power or social structure, then it is the structure and not the remedy that is unrealistic.  But the prescription has to be possible to begin with.  Let this be the cautionary admonition of this essay.      

My sense is that Wilson is right, but that his prescription is unlikely to be realized.  In my next post, I will offer what I believe could be a general outline to save the planet from environmental catastrophe.  

Adam Frank and the Biosphere Interpretation: the Anthropocene in Wide-Angle

Adam Frank, Light of the Stars: Alien Worlds and the Fate of the Earth, New York: W.W. Norton & Company, 2018, 229 pages.

Disclaimer: I am currently still reading this book (Frank gave an admirable summary of his ideas in an interview with Chris Hedges on the program On Contact).

Any book with endorsements by Sean Carroll, Martin Rees, and Lee Smolin on the dust jacket is likely to catch the attention of those of us who dabble in cosmology.  Adam Frank’s book is not about cosmological speculation or extrapolations of theoretical physics.  It is about the environment in the broadest of contexts.  It characterizes two distinct but overlapping worlds that ultimately merge.  The first is a view of life in a cosmic sense and the other is about life and civilization in a human context and scale.

On the first point, Frank sees the Anthropocene as just another transition: humans may be causing mass extinctions, but as mammals we are equally the product of a mass extinction (the extinction of the dinosaurs allowed mammals to come to the fore).  Hey, these things happen and some good may come out of them—we did.  Life will go on even if we don’t, and if we ruin the world as we knew it, relax—nature will deal with it after we are gone and will create something altogether new out of the wreckage.  The Anthropocene may be bad for us—and many of our contemporary species—but we are simply “pushing” nature “into a new era” in which Earth will formulate new experiments (as all life, individual creatures, species, and periods of natural history are experiments).  We are just another experiment ourselves, quite possibly a failed one (and, if we really screw things up, the Earth might end up as a lifeless hulk like Mars or Venus).

This larger amoral picture—although undoubtedly true—seems ironic coming from someone as affable, as glib as Frank.  But the wide-angle gets even wider.  When talking in astronomical terms, it is inevitable that any thinking reed will be dazzled by the numbers and characterizations of the dimensions of the night sky, of our own galaxy and the uncounted billions of others scattered across the observable universe beyond it.  In this respect, Frank (like any astronomer or astrophysicist throwing numbers out about the cosmos) does not disappoint.  If he had left things here, I would conclude that he is likely right, but that no thinking, feeling being could surrender to such fatalism without a fight.  After all, nature makes no moral suppositions, but moral creatures do.  But he does not stop there.

Over the expanse of our galaxy (to say nothing of the observable universe), it is likely that life is common or at least exists in numerous places among the planets orbiting countless trillions of stars in the hundreds of billions of galaxies.  It seems likely that humans are rendering the Holocene a failed phase of the experiment, because it produced us.  But life will likely persist in some form regardless of how things turn out here.

Where Frank transitions from the very large to the merely human, he synthesizes the amorality of Gray with the mythmaking of Scranton toward an end perhaps along the lines of Wilson.  Unlike with Gray, there is no tone of judgment or chastisement.  On the contrary, he believes that the whole good-versus-bad placing of blame of the various “we suck” perspectives should be avoided: our nature absolves us from judgment; we are just doing what any intelligent (if immature) animal would do in our situation.

Frank analogizes humankind to a teenager—an intelligent, if inexperienced, self-centered, willful being who assumes that its problems are uniquely its own and therefore have never been experienced by anyone else before.  He assumes that the sheer number of planets in our neighborhood of the Milky Way suggests that there are plenty of other “teenagers” in the neighborhood, some of whom have died of their folly and the inability to change their ways.  Others may have learned and adapted.  As for us, we need to grow up, change our attitude, and learn to sing a new and more mature song.  Frank sees the human capacity for narrative as the way out, except, unlike Scranton, he believes new myths to be our potential salvation rather than just a way to die with meaning.

In an interesting parallel to Frank’s view of humans as cosmic teenagers, Wilson characterizes us and our civilization in the following terms: “We have created Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused by the mere fact of our existence, and the danger to ourselves and the rest of life.” [See Ch. 1 “The Human Condition” in Wilson’s The Social Conquest of Earth, p. 7].  So how are we supposed to grow up?

According to Frank, in order to reach a steady-state level of human life on the planet, we need new myths about what is happening in order to drive “new evolutionary behavior.”  We need narratives that will not only allow nature to proceed (a la Wilson), but which actually enhance nature—make a vibrant biosphere that is even more productive.  The new narratives would provide “a sense of meaning against the universe.”  They will be a way out.  On this point he is like Wilson in his attempt to merge the arts and science to address the problems and embrace an all-loving biophilia.

As with Purdy, Scranton, and Wilson, Frank believes that a global egalitarianism would be necessary to achieve a steady state.  Once again the problem is how to do it.  How do we generate these narratives in a world where some powerful leaders do not concede that there is even a problem?  If the threat of nuclear annihilation and the urging of a world-historical intellect like Albert Einstein after the bloodiest war in all human history did not push humankind even an inch toward merging into a single egalitarian tribe, one must wonder if anything can (and the history of the past century shows that when you redistribute wealth you only standardize misery).  In 1946 everybody believed that the atom bomb existed, while today there are powerful interests and world leaders who still deny the reality of human-caused climate change.  Human beings would have to completely reconfigure our relationship with nature and with each other, and do it in the immediate future.  Could this be done even at the gunpoint of environmental catastrophe?  How would a candidate in a democratic system in a wealthy nation pitch such a transformation to the electorate?  Again: how do we get there?  As they say Down East, you can’t get there from here.

Similarly, Frank’s analogy of humankind to a self-absorbed teenager is suggestive, but is the comparison supposed to fit into the context of a lifecycle that is historical or natural-historical (i.e. is he talking about an adolescent in the context of human civilization as a phenomenon of 9,000 years, or of a species that is 200,000 years old?)?  If his idea is that our species has an outlook that is adolescent in terms of evolutionary development, then it seems unlikely that we can grow up quickly enough to become a bona fide adult—that the necessary maturity to turn things around will not occur in the timeframe in which the environmental crises will unfold.  Wilson talks in similar terms in at least one of his books: that we must start thinking maturely as a species un-tethered from old theistic myths and tribalism.  And yet the current state of affairs suggests that we are as far away from that point as ever, that such tribalistic tendencies as ethnic nationalism and fundamentalist religion are as strong as ever.  The human nature analogized by Frank and Wilson is not just a sticking point to be overcome or a hurdle to be jumped, but rather a central fact of our animal nature that currently appears to be insurmountable.

One small issue I have with the book is the fact that the existence of life and civilizations on other planets is at this point purely conjectural.  The dazzling numbers Frank presents plausibly suggest that life may be fairly common—indeed, the numbers make it seem almost ridiculous to think otherwise.  But, if I recall my critical rationalist philosophy correctly, it is impossible to falsify a probability statement, and at this point such a claim is pure speculative probability rather than actual observation or corroboration.  He talks about a conjectural “great filter”—the idea that intelligent life kills itself off (if its maturity lags far behind its intelligence).  Another pregnant conjecture.
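To make the scale of the conjecture concrete, here is a minimal back-of-envelope sketch in Python of the kind of expected-value arithmetic that makes the numbers so dazzling.  The inputs are round illustrative assumptions of my own, not Frank’s figures, and the exercise illustrates the point above: only the first two quantities are even roughly constrained by observation; the last is pure conjecture, so the output is a speculative probability rather than a corroborated claim.

```python
# Back-of-envelope expectation for life-bearing planets.
# All inputs are illustrative assumptions, not measurements or Frank's own figures.

n_stars_milky_way = 2e11   # rough order-of-magnitude star count for our galaxy
n_galaxies = 2e12          # rough galaxy count for the observable universe
f_habitable = 0.2          # assumed fraction of stars with a habitable-zone planet
p_life = 1e-10             # assumed (currently untestable) chance life arises per such planet

expected_in_galaxy = n_stars_milky_way * f_habitable * p_life
expected_in_universe = expected_in_galaxy * n_galaxies

print(f"Expected life-bearing planets in the Milky Way: {expected_in_galaxy:.1f}")
print(f"Expected life-bearing planets in the observable universe: {expected_in_universe:.2e}")
# Even with a one-in-ten-billion chance per habitable planet, the expectation is a
# handful per galaxy and trillions across the observable universe; but because
# p_life cannot currently be tested, the result dramatizes large numbers rather
# than corroborating anything.
```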

What I liked especially was his description of James Lovelock and Lynn Margulis’s Gaia hypothesis: that life is an active “player” in the environment and that it is able to keep the atmosphere oxygen-rich by preventing its combination with compounds that would otherwise result in an oxygen-free “dead chemical equilibrium” like the atmospheres of Mars and Venus.  The biosphere therefore acts as a regulator keeping oxygen at a near-optimal 21% level of the atmospheric mix (it was not clear to me how severe periods such as ice ages fit into the “regulation” of the environment).  This regulated balance is called a “steady state” (Lovelock analogizes this to the way the body of a warm-blooded organism regulates its temperature).  Lovelock intended to call this idea the “Self-regulating Earth System Theory,” but at the urging of William “Lord of the Flies” Golding, settled instead on the more poetic “Gaia.”

With an interest “in the question of atmospheric oxygen and its microbial origin,” Lynn Margulis, formerly married to Carl Sagan, teamed up with Lovelock in 1970.  As Frank notes, “[w]here Lovelock brought the top-down perspective of physics and chemistry, Margulis brought the essential bottom-up view of microbial life in all its plenitude and power” [p. 125].  Frank observes that “[t]he essence of Gaia theory, as elaborated in papers by Lovelock and Margulis, lies [in] the concept of feedback that we first encountered in considering the greenhouse effect” [p. 125], and that “Lovelock and Margulis were offering a scientific narrative whose ties to the scale of world-building myth were explicit” [p. 127].  Taken as an observation statement, the Gaia hypothesis, insofar as it characterizes an observable “self-regulating planetary system,” seems something close to a scientific organon, supported by Lovelock’s ingenious “Daisyworld” thought experiment; whether or not the biosphere is a singular living entity that will eliminate humans as a pathogen would still seem to be a metaphysical assertion.
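Because Daisyworld is simple enough to state in a few lines, a toy version may help make the feedback idea concrete.  The sketch below is a minimal Daisyworld in Python, written from the commonly published parameters of Watson and Lovelock’s 1983 model rather than from anything in Frank’s book; the function names and the particular luminosity values are my own illustrative choices.  Dark daisies warm their surroundings and pale ones cool them, and as the toy sun brightens their relative cover shifts, so that the planetary temperature stays near the growth optimum over a range of luminosities instead of rising in lockstep with the sun.

# Minimal Daisyworld sketch (toy model, for illustration only; parameters are the
# commonly published ones from Watson and Lovelock's 1983 paper, not from Frank's book).

SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0             # solar flux reaching Daisyworld at luminosity 1.0 (W m^-2)
Q_TRANSFER = 2.06e9      # local heat-transfer parameter (K^4)
DEATH_RATE = 0.3         # daisy death rate per unit time
ALBEDO = {"ground": 0.50, "white": 0.75, "black": 0.25}

def growth_rate(temp_k):
    """Parabolic growth response peaking at 22.5 C and falling to zero near 5 C and 40 C."""
    return max(1.0 - 0.003265 * (295.65 - temp_k) ** 2, 0.0)

def step(a_white, a_black, luminosity, dt=0.1):
    """Advance daisy cover one Euler step; return (a_white, a_black, planetary temperature in K)."""
    a_ground = max(1.0 - a_white - a_black, 0.0)
    albedo = (a_ground * ALBEDO["ground"]
              + a_white * ALBEDO["white"]
              + a_black * ALBEDO["black"])
    t_planet4 = luminosity * FLUX * (1.0 - albedo) / SIGMA
    t_white = max(Q_TRANSFER * (albedo - ALBEDO["white"]) + t_planet4, 0.0) ** 0.25
    t_black = max(Q_TRANSFER * (albedo - ALBEDO["black"]) + t_planet4, 0.0) ** 0.25
    d_white = a_white * (a_ground * growth_rate(t_white) - DEATH_RATE)
    d_black = a_black * (a_ground * growth_rate(t_black) - DEATH_RATE)
    # A small floor keeps a seed population so daisies can re-establish as conditions change.
    a_white = max(a_white + d_white * dt, 0.01)
    a_black = max(a_black + d_black * dt, 0.01)
    return a_white, a_black, t_planet4 ** 0.25

if __name__ == "__main__":
    a_white, a_black = 0.01, 0.01
    # Sweep the toy sun from dim to bright, carrying the daisy populations forward,
    # and watch the planetary temperature hold near the growth optimum (~22.5 C).
    for i in range(11):
        luminosity = 0.6 + 0.1 * i
        for _ in range(600):   # let the populations settle at this luminosity
            a_white, a_black, temp_k = step(a_white, a_black, luminosity)
        print(f"L = {luminosity:.1f}  white = {a_white:.2f}  black = {a_black:.2f}  "
              f"T = {temp_k - 273.15:5.1f} C")

Nothing in this sketch depends on Frank’s argument; the point is only that a purely mechanical feedback between life and its environment can produce the kind of regulation the Gaia hypothesis describes, without any appeal to a planetary organism.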

Unfortunately, this is as far as I have read in his book.

Conclusion 

Building on Frank’s example of humanity as an experiment flirting with failure, a friend of mine suggested comparing the individual human being in a time of collapse to an individual cancer cell.  Imagine that such a cell was somehow conscious and could reflect on its complicity in killing a person.  It might express regret yet philosophically conclude, “but what can I do? I am a cancer cell.”  So it is with people and their kind.  Is this a denial of agency or a facing of facts?  Is it an admission that human beings—neither good nor bad in the broad focus of nature (although objectively out of balance with their environment)—are like cancer cells killing a person regardless of personal moral inclinations?  We are just the latest imbalance—like the asteroid (or whatever it was) that killed off the dinosaurs and the causes of the Earth’s other great extinctions.  And so we arrive back at John Gray and biological destiny.

But even if we are cancer cells or merely a rapacious primate, I do not accept such a fate—again, Nietzsche’s Will.  We are also a “thinking reed.”  Even if there is no free will, there is still a will with an ability to learn from mistakes and experience—we must act as if there is free will.  Gray’s outlook might be a true position, and yet no person as an ethical agent can morally abide by it.  We are audacious monkeys and have to answer two questions: can we rise above our biology through reason and moderation and solve the seemingly insurmountable problems resulting from our own nature, and will we?  I believe that the answer to the first is a cautious “yes.”  The answer to the second question, however, may well render the first an academic point.

Consider the following historical thought experiment, also suggested to me by David Isenbergh: imagine that you could return to the late Western Roman Empire a few decades before it collapsed.  You see all of the imbalances, injustice, and misery of that period.  You identify yourself as a traveler from the future and tell the people you meet (you can obviously speak fifth-century Latin) that if they and their civilization do not reform their ways, there will be an apocalyptic collapse resulting in 500 years of even greater darkness and misery.  Suppose too that you were even able to get this message to the powers that be.  Do you think you would be listened to, or would you be treated as mad as events continued unaltered on their way to disaster?  As I have noted elsewhere, in a world of the blind, a clear-sighted man would not be treated as a king, but rather as a lunatic or heretic, and would likely be burned at the stake if caught.

In Malthusian terms, we are a global plague species.  In geological/astronomical terms, we are just the latest phenomenon to fundamentally alter and test the resilience of life on Earth.  But even if these observations are true, we are also moral beings, and to embrace such a fate as inevitable, recommending a posture of adaptation and wishful thinking that the planet will not deteriorate as far as the chemical equilibria of Mars and Venus, is the equivalent of justifying WWII by pointing to the postwar successes of Germany, Japan, and Israel (as regards the former two, one could make the observation that sanity followed psychosis).

At the end of the review of Scranton’s Learning to Die in the Anthropocene, I asked: what is meaning in a dying world?  I will add only this: if the human story is coming to a close, then there is one great if austere luxury in being a part of this time, one that is as interesting as it is unsettling.  As individuals, we never know the full story of our lives until the very end (if even then).  If the end of progressive civilization is upon us in a matter of decades, then we have a greater and fuller understanding of the overall human project than any people at any time in history.  Rather than narratives of progress or decline, agrarian or democratic myths, historicist cycles or eschatology heading toward a terrestrial or providential endgame of history with salvation at the end, we may come to learn that history was just the progress of a plague species toward its own destruction by means of the extended phenotype we call civilization.

Finally, one of the things I have taken away from these six books and from my own discussions on the topic is that there are two powerful generational disconnects at play.  The first is common among older people (say, over 80) who have little or no idea of the scale of the problems facing us—that modernity, civilization, and their species generally are already failed projects—but who have a certain understanding of history.

The other disconnect is among young people who are far more in touch with ecological issues, who see the problems for what they are, and whose various diagnoses and potential remedies are at least on the scale of the problems, but whose prescriptions are unrealistic to the point of utopian absurdity.  On this point, Purdy and Scranton are anomalies who know history as well as anybody, but who seem to take after others of their generation (and the subsequent generation) in being unable to apply its lessons.  Frank and Wilson know natural history and yet also speak of a global egalitarian regime.  To be fair, nobody has an answer, and even the one I find to be most realistic, when walked through step by step, ends up being something akin to utopian itself.

Several times I have analogized the crises of the environment to the early phases of WWII.  The current situation is unlikely to unfold as quickly as that conflict, and it is difficult to know the point in the conflict at which, by analogy, we now find ourselves.  It is unclear whether we are at the point in history analogous to the doomed conference at Versailles, the Japanese invasion of Manchuria, the Occupation of the Rhineland, the Spanish Civil War, the Czechoslovakian crisis, the invasion of Poland or France, Operation Barbarossa, or the attack on Pearl Harbor.  As I noted in the introduction, it is also unclear when we will cross a point of no return.  Are we to be Churchills and Roosevelts, or are we to surrender to our fate?

Sometime later this year or early next year, I hope to post another insufferably long discourse on how we might chance to turn things around.

Notes

  1. William Strunk, Jr. and E.B. White, The Elements of Style, New York: Macmillan Publishing Co., Inc., 3rd ed., 1979, pp. 71-72, 80.
  2. For Eremocene or “Age of Loneliness,” see Edward O. Wilson, Half-Earth: Our Planet’s Fight for Life, New York: W.W. Norton & Company, 2016, p. 20.  For Anthropocene, or “Epoch of Man,” see p. 9.
  3. David Archer, The Long Thaw, Princeton University Press, 2009, p. 1.
  4. On political disputes disguised as scientific debates see Leonard Susskind, The Black Hole War, Boston: Little Brown and Company, 2008, 445-446.
  5. Roy Scranton, Learning to Die in the Anthropocene, San Francisco: City Lights Books, 2015, p. 14.
  6. Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  7. See generally Edward O. Wilson, The Future of Life, New York: Alfred A. Knopf, 2002.
  8. Alasdair Wilkins, “The Last Mammoths Died Out Just 3,600 Years Ago, But They Should Have Survived,” March 25, 2012.
  9. Gray attributes this term to Edward O. Wilson in Consilience, New York: Alfred A. Knopf, 1998.  Apparently Wilson also denies “that humans are exempt from the processes that govern the lives of all other animals.”  Wilson uses the similar term Eremocene in Half-Earth, p. 20.
  10. Edward O. Wilson, The Future of Life, p. 29. 
  11. See Karl Popper’s essay “A Realist View of Logic, Physics, and History” in Objective Knowledge, Oxford: Clarendon, 1979 (revised ed.), p. 285.
  12. George Kennan, Around the Cragged Hill, New York: W.W. Norton & Company, 1993, p. 142.
  13. See Karl Popper: “Science: Conjectures and Refutations,” Conjectures and Refutations, New York: Basic Books, Inc., pp. 33-65.  Evolution is an example of an idea that may be true but is not, strictly speaking science (although it contains scientific elements that can be tested via experimentation).  Gray himself makes the point that many non-scientific ideas are often of great importance.  Straw Dogs, pp. 20-23.
  14. On stable equilibria and tipping points, see generally Per Bak, How Nature Works, Springer-Verlag New York, Inc., 2006.
  15. For Gray’s perplexing views of consciousness and artificial intelligence, see Straw Dogs, pp. 187-189.  We do not even know what consciousness is.  It is therefore remarkable that Gray can assert that machines “will do more than become conscious. They will become spiritual beings, whose inner life is no more limited by conscious thought than ours.”  Leaving aside weasel words like “spiritual,” it seems likely that if machines ever do become conscious, it will be the result of an uncontrolled emergent process (the way that consciousness arose as a natural phenomenon), and not the product of technological progress along the current lines of algorithms and hardware.  Consciousness appears to be the result of the physical (biological/electrochemical) processes of the brain.  As anyone who has known someone with a brain injury, mental illness, or Alzheimer’s disease knows, to the degree that the brain is damaged, diseased, or otherwise diminished, the mind diminishes correspondingly, if unpredictably.  And yet, like all phenomena emerging from more primal categories, the mind is not fully reducible to physical processes.  The objections to the reduction of consciousness to “mechanical principles” made by Leibniz in his Monadology are as alive and well today as they were in 1714.  See G.W. Leibniz’s Monadology, An Edition for Students, University of Pittsburgh Press, 1991, Section 17, pp. 19, 83-87.
  16. For Gray’s prescription for the human predicament, see Straw Dogs, pp. 197-199.  His idea of “the true objects of contemplation” and his “aim of life as simply to see” are sensible if austere goals toward greater intellectual and psychological honesty, and are reminiscent of Nietzsche’s idea of “forgetfulness” expounded in Section 1 of his “On the Uses and Disadvantages of History for Life.”  But where Nietzsche advocates animal forgetfulness to allow people the freedom to act forthrightly and without inhibitions, Gray believes that action only makes contemplation possible and that the real goal is understanding without myths, false self-awareness, and the illusion of meaning.  See Untimely Meditations, Cambridge University Press, R. J. Hollingdale, trans., 1983 [1874], pp. 60-67.  As regards the environmental crises, Nietzsche’s prescription would allow for action (although action without historical memory would seem a recipe for catastrophe as a basis for policy), whereas Gray would allow only for a dispelling of illusions, out of which others might find grounds for meaningful action even if Gray does not believe such action is possible.  His idea also has a curious, if inverse, relationship to that of Roy Scranton in Learning to Die in the Anthropocene.
  17. Oliver Wendell Holmes, Jr.  
  18. Charles Dickens, A Christmas Carol.
  19. Malthus speaks of the leveling of population to match resources, p. 61.
  20. By “closed” I mean deterministic.  See generally Karl Popper, The Open Universe, London: Routledge, 1982.  In a closed universe, all events are determined and future events may, in a sense, already exist if time as characterized by Einstein’s block universe model is correct.  As Popper observes, in a closed universe every event must be determined: “if at least one (future) event is not predetermined, determinism is to be rejected, and indeterminism is true” (p. 6).  In a closed universe there is chaos (deterministic disorder); in an open universe there is randomness (objective disorder), and therefore the possibility of novelty and freedom.
  21. This analogy was suggested to me by David Isenbergh.
  22. See Chapter 10, “Fecundity,” in Annie Dillard, Pilgrim at Tinker Creek, New York: HarperCollins, 1974, pp. 161-183.
  23. For instance, see his reply to Jedediah Purdy in the January 11, 2016 number of Boston Review.
  24. See generally, Robert D. Kaplan, The Coming Anarchy, Shattering the Dreams of the Post Cold War, New York: Random House, 2000.
  25. For instance, see Thomas Cahill’s popular history How the Irish Saved Civilization, and Barbara Tuchman’s chapter “‘This Is the End of the World’: The Black Death,” in A Distant Mirror.
  26. Wilson, The Future of Life, 27.
  27. The Open Society and Its Enemies, Princeton University Press, 2013 [1945], p. xliv.
  28. William Tecumseh Sherman, letter to James M. Calhoun, et al.  September 12, 1864.  Sherman’s Civil War, Selected Correspondences of William T. Sherman, 1860-1865, Brooks D. Simpson and Jean D. Berlin, eds., Chapel Hill: University of North Carolina Press, 1999, pp. 707-709.
  29. According to Jane Jacobs, healthy economies arise through naturalistic growth based on the natural and human resources of a region and on import-shifting cities.  This cannot be forced or created as part of a top-down plan (unless it is simply to rebuild existing systems, as with the Marshall Plan after WWII).  See generally Jane Jacobs, Cities and the Wealth of Nations, New York: Random House, 1984.  The idea of correcting economic imbalances through structural remedies would probably make bad situations even worse.  My reading of historical events like the Russian Revolution and the period following the Chinese Civil War is that attempts to redistribute wealth only standardize misery outside of the rising clique, the new elites.  As David Isenbergh observes, power concentrates, and when it does, the new elites tend to act as badly as the old ones.  This is one reason why Marxism—although insightful in its historical observations—fails utterly in its prescriptions.
  30. As the late Tony Judt observes, “[t]here may be something inherently selfish about the social service state of the mid-20th century.  Blessed with the good fortune of ethnic homogeneity and a small, educated population where almost everyone could recognize themselves in everyone else.”  See Tony Judt, Ill Fares the Land, New York: The Penguin Press, 2010.
  31. The analogy of a world dominated by ants or termites was suggested to me by David Isenbergh.
  32. See Carl Safina, Beyond Words: What Animals Think and Feel, New York: Henry Holt and Company, 2015, and Bernd Heinrich, Mind of the Raven, New York: HarperCollins, 1999.  See also Frans De Waal, Are We Smart Enough to Know How Smart Animals Are?, New York: W.W. Norton & Company, 2016, and Mama’s Last Hug, New York: W.W. Norton & Company, 2019.
  1. On cellular intelligence, see James Shapiro, Evolution: A View from the 21st Century, Saddle River, NJ: FT Press, 2011.  On symbiosis, see Lynn Margulis, Symbiotic Planet, New York: Basic Books, 1999.  See also Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  2. For instance, see generally Edward O. Wilson’s The Future of Life, New York: Alfred A. Knopf, 2002, pp. 22-41.
  3.  See note 2.   
  4. For instance, Purdy states that “Wilson is in the minority of evolutionary theorists in arguing that human evolution is split between two levels of selection: individual selection, which favors selfish genes and groups.”  I have not polled evolutionary scientists about whether or not they accept multi-level evolution, but it is safe to say that it is not the radical idea of an apostate minority.  Although not embraced by “selfish gene” ultra-Darwinists, multi-level selection is a widely accepted idea among the evolutionary biologists sometimes called “naturalist” Darwinists (see generally Niles Eldredge, Reinventing Darwin, 1997; see also Stephen Jay Gould, The Structure of Evolutionary Theory, 2002).  Multi-level selection was first speculated on by Darwin himself and finds its origins in The Descent of Man, 1871, p. 166: “It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over other men of the same tribe, yet that an increase in the number of well-endowed men and the advancement in the standard of morality will certainly give an immense advantage to one tribe over another.”
  5. There are formulas, notably the species-area relation, for predicting the loss of biodiversity relative to the loss of habitat; the number of species declines by a smaller fraction than the area lost (see the worked example following these notes).  Edward O. Wilson, Half-Earth.
  6. Boston Review, January 11, 2016.
  7. On the unimportance of definitions in critical discussions, see Karl Popper, Objective Knowledge, 58, 309-311, 328.
  8. See Wilson, Half-Earth, pp. 77-78.  In response to a question on this point during a discussion and book signing on November 16, 2016, David Biello gave a similar interpretation of Wilson’s perspective.  Biello’s book is The Unnatural World, New York: Scribner, 2016.
  9. Mark DeWolfe Howe, Justice Holmes: The Shaping Years, 1841-1870, Cambridge: Belknap Press, 1957, p. 154.
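A brief worked example of the species-area arithmetic mentioned in note 5 may be useful.  The relation and the exponent below are the commonly cited textbook forms, not figures taken from Wilson’s text:

S = cA^{z}, \qquad \frac{S_{2}}{S_{1}} = \left(\frac{A_{2}}{A_{1}}\right)^{z}

With the frequently used value z ≈ 0.25, reducing habitat to one-tenth of its original area leaves (0.1)^{0.25} ≈ 0.56, or a bit more than half of the original species, which is the sense in which biodiversity falls by a smaller fraction than habitat.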

The Four Categories of The Establishment

By Michael F. Duggan

In this posting, I would like to propose an integrated way of thinking about political and policy leadership and advisement in terms of categories defined by function, role, and personality type.  Although I do not subscribe to the fallacy of psychologism—reducing a person’s ideas to his or her mental state instead of taking the concepts on their own merits—I believe that personality type does play a role in one’s policy outlook.  I do not know whether anyone has suggested a similar model; I am not aware of one.

Rather than examining policy outlooks on a conventional ideological spectrum from left to right (although these categories certainly fit into my scheme), perhaps we should look at how policy outlooks stand in relative proximity to one another along a range from moderate to severe, by categories of temperament, personality, and imagination, and by type in terms of approach and function in implementing policy.  Some categories are ideologically neutral and take on the doctrinal coloration of their time and place.  Because of this, my model has elements of both a scale and a spectrum.  The idea is to look at these things in terms not entirely reducible to ideology (which the model treats only as a single factor or intensifier), but rather in terms of how they function in the real world in regard to competing individuals and their policy positions.

In policy as in business, these categories of leaders and advisors are Conventionalists, The Establishment (and Establishment Types), Mavericks, and Rogues.  These categories are seldom, if ever, found in pure, unalloyed form, and they may overlap, influence, build upon, and cross-pollinate with each other, even in a single person.  There are also the multitudes of followers, who also break down along these lines.  This is not a completely fleshed-out idea, but one that I am throwing out in very nascent form.  Per usual, I wrote this very quickly, so please forgive any errors.

Conventionalists

Conventionalists are men and women who subordinate their views to the perspectives à la mode, judging allegiance to those outlooks necessary in order to advance themselves, and who keep an eye to the powers that be who promote and embody the dominant ideology of the times.

The Conventionalists are therefore careerists and credentialists, for whom credentials are value-neutral instruments required to get ahead.  In periods of sensible policy outlook, these people can be constructive in that they reinforce positive trends by their numbers, if not by strong commitment to good ideas.  They blow with the wind.

Beyond self-interest, the view of Conventionalists is often (at least publicly) non-ideological in a negative sense (realism may also be non-ideological, but has often been very constructive in its strong belief in a desirable goal and in its result-oriented flexibility).  The Conventionalist point of view is perhaps akin to a tendency toward moral neutrality or petty sociopathy and the amoral sensibility that whatever advances one’s career is by definition good, regardless of the ethical and practical consequences.  They will adhere to failed policy if the mainstream continues to embrace it.  The driving forces in this type are the ego and its desire for power.  

In our own time, Conventionalism reinforces the mostly uniform beliefs of the Washington Consensus (see Andrew Bacevich, Washington Rules and Twilight of the American Century), or the dominant orthodoxy of both parties that subscribes to neoliberalism, economic globalization, a domestic economy founded on Big Finance and an ever-growing split between high-end and low-end services, and U.S. military hegemony and the industries related to it.  By virtue of the nearly universal dominance of this outlook in the upper reaches of the government, this is also the controlling view of the Establishment in recent years.  As Andrew Bacevich and others have observed, you will not get anywhere in government today if you do not swear allegiance to this ideology (see America’s War for the Greater Middle East).  In this sense, an Establishment characterized by conformity drives the dominance of Conventionalism at all levels of policy.  Individuals of this type should not be confused with the lower-level career government servants who are the backbone of the Federal Government and who tend to avoid the political intrigues of successive administrations.

It should be noted that a good (read: loyal or compliant/cooperative) subordinate may be a genuine protégé, or he/she may be an earnest believer in a different outlook biding his/her time (as with the good soldier, the conservative William Howard Taft, during the more progressive administration of Theodore Roosevelt).  On a less positive note, he or she may equally be an opportunistic true believer playing the part of the sycophant and waiting for their time. 

The Establishment and Establishment Types

The Establishment is the governing mean, the formal and informal structural context in which these types exist and operate.  It is in principle value-neutral but always takes on the character and ideology of the people in it (today it is Neoliberal and Neoconservative; in the late 1940s, it was dominated by an outlook of moderate realism).  This is the generalized governmental temperament of a period, a slack tide of multiple perspectives settling into a status quo in which strong-minded individuals may divide the policy community into camps—into a majority as well as influential plurality and minority outlooks.  The dominant of these is the official view of the government, although historically there have usually been balancing and countervailing currents.

An Establishment representing the outlook of an administration may avail itself of Mavericks (see below) and take on the character of their ideas (e.g. the New Deal, the Marshall Plan).  As a thing-in-being, there is always an Establishment of strong players in the system, and it is highly unlikely for an Establishment to exist without a dominant view.  As in nature, a policy environment hates a vacuum, and a strong personality or coalition will always tip an unstable equilibrium in one direction or another.  On a related note, the best presidents are always at the heart of their administrations, and therefore determine or heavily influence the direction of the Establishment of their times.  There are always balancing elements, resistance, and cross-currents from other bastions of power and estates of the sovereign whole.

Because the Establishment takes on the character of the dominant perspective, it is altogether possible to have a Maverick or even a Rogue Establishment.  The most constructive Establishments, in my opinion, are those that utilize constructive/innovative ideas of Mavericks (as with the New Deal—Roosevelt was both a Maverick and Establishment Type who listened to and employed the energies of many Maverick public servants).  In terms of historical context, it is tempting—at least for me—to measure the Establishment of prior and successive periods by the baselines of the social democratic domestic Establishment of 1933-1970 (or thereabout), and the foreign policy and military Establishment of 1939-1950 (or thereabout). 

It is significant that dominant leaders and the Establishments they head vary with the policy context and situational dictates of the time.  A sensitive leader intuitively divines what political approach is called for and then attempts to fulfill those needs in terms of leadership, management, and policy/goals.  Over the long course of political history there have been Bringers of Order (Charlemagne, Alfred the Great, and other notable leaders of the late Dark Ages who allowed for the comparative order of the later Middle Ages), Caretakers/Preservers of the Status Quo (most of the U.S. presidents between Lincoln and Theodore Roosevelt), Conservative Reformers (Grover Cleveland and the early Theodore Roosevelt), Progressives (President and Bull Moose Theodore Roosevelt, and Woodrow Wilson—in an economic, if not a social justice, sense), and Transformers (the Founders/Framers taken as a whole, Lincoln, Franklin Roosevelt).  There is also an often corrupt category (e.g. urban political machines based on ethnicity and identity) that picks up the slack when official government is insufficient or not doing its job.

The Establishment Type

There is a distinction to be made between the Establishment Type and the Establishment.  This type tends to be temperamentally conservative, but the best of them are innovators who readily embrace Mavericks and their ideas and prescriptions (e.g. George C. Marshall as Secretary of State with Kennan as the Director of the Office of Policy Planning).  They differ from Conventionalists in putting the system above themselves and in seeking to do what is right in a broader sense than simple careerism.  The best Establishment Types utilize the creativity of Mavericks and manage or contain Rogues.  In bad times, Establishment Types balance and stabilize.  Under good leadership they are also a positive element.  They subordinate their careers to duty and service.  Under effective leadership, they tend to rise on a basis of merit rather than credentials.

Mavericks

Mavericks are the idea men and women—intellectuals—and may be practical or impractical (or even utopian), constructive or pernicious.  The best of them are Cassandras and Jeremiahs who rely not so much on theories as on insight and who may design doctrines of their own; the worst are true believers touting rigid ideology and dogma.  The former are the intuitive creative types who see things earlier and more accurately than others do and are able to plan effectively accordingly.  More generally defined, Mavericks can be vigorous and influential intellectuals of any ideological stripe.  In some instances they may embody the cutting edge of the zeitgeist of their times, but they may come to be regarded as ambiguous or even harmful in a larger historical context and in retrospect (e.g. the navalist historian Alfred Thayer Mahan).

Mavericks are weighed in terms of the effectiveness of their policy prescriptions.  In an administrative sense, Mavericks are measured by the degree of their influence as well as by their distance from the previous status quo of the Establishment and the centrality of their role in creating a new one.  This is why a moderate realist like George Kennan, who had studied history and knew what had worked in the past, what had not, and why, was as much a Maverick as the first Neoconservatives, who were true believers in a theoretical ideology with questionable historical antecedents.  Kennan’s influence contributed to a moderate, if short-lived, Establishment that was quickly supplanted by more ideological Mavericks like Nitze.

The best Mavericks are insightful creative types who “think outside-of-the-box” (to use an inside-the-box cliché) and devise imaginative policy solutions.  The worst are true believers or else cynics implementing the desires of powerful interests both inside and outside of government.

The “Good” Maverick

Good Mavericks tend to be high-minded realists who see each new situation with fresh eyes and without assumptions other than a broad and deep base of intimate and formal historical knowledge.  Some are outsiders who made it on merit (Hamilton, Kennan).  This type of advisor may seem inconsistent to unimaginative Conventionalists and to bad or “Malignant” Mavericks when Good Mavericks prescribe different responses to superficially similar situations that are fundamentally dissimilar, or when an idea or approach did not produce favorable results when first used.  The Good Maverick eschews ideology, groupthink, and over-reliance on theories and simple formulas.  Historically they have often been a special kind of outsider who succeeded on a basis of merit and insight.  To work effectively, this kind must be allowed space for creativity and a free hand (as with Kennan in the Office of Policy Planning, and Kelly Johnson in his Lockheed “Skunk Works”).  We live in a time that despises constructive Mavericks.

Given the policy types I have already mentioned, it is noteworthy that in my scheme, Mavericks shake things up, where Establishment Types tend to embrace order and the status quo but may be open to new ideas.  It is possible for the dominant strata of an Establishment to be comprised of Good Mavericks co-mingled with Establishment Types (e.g. Harriman, Kennan, Lovett, and McCloy during the immediate Post-WWII era) or else true believers (e.g. John Hay, Henry Cabot Lodge, Alfred Thayer Mahan, Theodore Roosevelt, and Elihu Root, during the age of American imperialism).

It is notable that a great leader, although often difficult to categorize or analyze in terms of systems and general reductions, must have qualities of the Maverick along with the balance, leadership, and management skills to direct the Establishment and lead the electorate.

The “Malignant” Maverick

These are the influential ideologues or true believers in theories who are able to sway leaders and colleagues and to shape policy and the nature and direction of the Establishment.  They may do this with native charisma, force of personality, and the skills of departmental and political infighting.  They typically have a showy, if narrow and superficially impressive, intellect that may dazzle and persuade.  In extreme form they may become Rogues.  We live in a time in which this kind of Maverick has set the keynote for the Establishment.

Rogues

Rogues are the self-interested adventurers, the authoritarian lovers of power for its own sake and for the gratification of the ego, the borderline or bona fide sociopathic businessmen and women, plutocrats, or military leaders.  Rogues are a more extreme hybrid of the Careerist and the Maverick and may appear to be the latter (or rather, the latter, unchecked, may morph into an actual Rogue).  Where Mavericks may be understated or charismatic, Rogues tend to be predominantly charismatic and may be powerful demagogues.  Very often they are populist juggernauts.

These are people who may reach a position where they can defy the Establishment unless and until they are somehow checked, or else they come to dominate it.  They can be useful in time of war as a military type if pointed toward an enemy and kept on a short leash by a strong and well-established system (it is less clear what to do with them when the war is over).  Regardless of whether they are in business, the military, politics, or policy, they must never be allowed to take over or dominate.

Individuals can begin as Rogue insurgents and end up as Conventionalist Establishment Types living off their reputations as bringers of change.

Conclusion

There you have it.  This is by no means a comprehensive list of “types” found in the Establishment: there are also Apostates—disillusioned true believers, idealists, and utopians who may go on to become strong critics of their former programs.  There are Whistle-Blowers, a hugely important category that is even more universally despised these days than the Good Maverick.  Most obviously, as a functional category, there are Principals—presidents, senators, representatives, cabinet members, department heads, and other high-level appointees.

Finally, there is also a functional category or type that I call, as a working title, the Opaque Player.  These are quiet, omnipresent high-level advisors of the inner circle who may be team players or self-interested individuals (this may be the type that Henry Adams characterizes as masters of the game for the sake of the game, but they may equally be loyal and dedicated public officials).  In some cases their true beliefs and motives are unknown outside of their immediate circle, and sometimes they are not fully known even there.  Some of this kind do not show their ideological hand publicly.  They may be great public servants, true believers, or low-key, high-level adventurers or even careerists.  Regardless of motives, these are typically the smartest people in the room (and in the Establishment generally) and may be a handler of a president, or else a henchman, a behind-the-scenes whip, or a button-pusher on his or her behalf.  They know how to “work the system” and get things done, and may be more responsible for implementing a program or agenda than the president himself or herself.  They may be a Chief of Staff or a personal, unofficial advisor at the highest level of the executive.  This is the type who has the ear of the leader, and in most administrations this type of person is one of the few who is able and positioned to speak the unvarnished truth to his or her boss.  They are able to deliver bad news to the president.  Not elected, they may be the most powerful people in the government in a practical sense, and under a weak leader may be de facto chief executives.  Examples (and as someone who is not a scholar of early modern history, I am not confident of these) may include Thomas Cromwell and Thomas Wolsey.  In our own tradition, Elihu Root may be an example of this type.  Power is fluid in a robust system, and this type may be far broader and less apparent than suggested by this definition.  There is also a lower-level version of this kind who may act as a personal emissary, lobbyist, or representative of the president (Thomas Corcoran might be an example).

I am not sure whether this scheme holds any water or if I have even interpreted my own ideas correctly or applied them accurately in terms of analyzing historical leaders and advisors (below).  It is still a very nascent work in progress and I just wanted to get it out there for the consideration of others.  Again, I wrote this very quickly, so please excuse any/all mistakes.

Historical Examples

In order to flesh-out these categories beyond mere criteria, consider the historical examples below.  This is nothing more than a shot-from-the-hip scattershot of opinion.

  • Theodore and Franklin Roosevelt were Maverick presidents who set the tone of the Establishment of their time.
  • George C. Marshall and Dwight D. Eisenhower were military Establishment Types.  An imaginative combat commander like Matthew Ridgway was a Good Maverick subordinate to them.
  • Churchill had characteristics of a Rogue, Maverick, and a conservative imperial Establishment Type.
  • The dynamic combat officers Curtis LeMay, Douglas MacArthur, and George Patton were extreme, frequently effective military Mavericks bordering on Rogues.  MacArthur was a cooperative Maverick Establishment Type during the rebuilding of Japan but became something like a partisan Rogue during the final phases of his command in Korea (he did return to the United States after being relieved, so he still acknowledged the authority above him).
  • J. Edgar Hoover was a pernicious Rogue who devised a departmental Establishment that exerted influence over the entire government.
  • Huey P. Long was a populist Rogue of state government and within the Democratic Party.
  • Robert Moses was an Establishment Rogue of New York’s public authorities.
  • Joseph McCarthy was a cynical careerist-turned-Rogue.
  • Lyndon Johnson seems to have had elements of all of the above categories (except perhaps the Conventionalist).  He availed his administrations of the services of Mavericks and Establishment Types.
  • Richard Nixon was an odd case: a highly individualized (almost outsider), hardball Establishment Type who became an unhinged Rogue but who, at the end of his administration, retained sufficient control to resign.
  • Napoleon was a strange amalgam of an adventurer, idealist, and realist that gives him qualities of a Maverick, Rogue, and a creator of an establishment.  One problem with a leader who rules by force of personality (other examples would be Cromwell and Castro) is that the system they put in place is difficult to sustain after them, thus creating problems of succession. 
  • Hitler was the most pernicious of Rogues. He created and presided over a regime based on an extreme crackpot ideology, ethnic phobia, myths of racial warfare, and bad science. The Weimar Republic before him was a weak and ineffectual Establishment.
  • Fidel Castro was a popular rebel who became a Rogue under the guise of a utopian revolutionary.
  • Josef Stalin, Mao: pernicious utopian Rogues.
  • Howard Hughes was a good Maverick business type.
  • Preston Tucker was a Good Maverick business type.
  • Jane Jacobs was a Good Maverick independent intellectual.
  • George Washington was an Establishment Type who devised the role of the president and demonstrated a Cincinnatus-like respect for the system by voluntarily relinquishing power at the end of two terms.  His key advisor, Alexander Hamilton, was the prototype American Maverick advisor.
  • Cromwell was a Rogue and Charles II was Establishment (here we can see outlook driving the respective roles). 
  • Bismarck was a conservative Maverick who created a domestic social welfare state and a military Establishment that only he could control. 

A Few Words on a Few Words (or “Hey You Kids: Get Your Neologisms off My Lawn!”)

Michael F. Duggan

At one level or another, every wordsmith is a curmudgeon about usage.  I will leave it to others to determine whether or not I qualify as a wordsmith, but it is certainly not beyond me to be a curmudgeon on some topics. There are people who can discourse at length about why the Webster’s International Dictionary 2nd ed. is superior to previous and subsequent editions, or why the Elements of Style is “The Bible.”  More generally everybody who writes or reads has favorite and least favorite words and preferred/least preferred usage.  Similarly, some of us have words and usages that are fine in some contexts but insufferable in others.  

There are pretentious neologisms, self-consciously trendy or generational hangnails, unnecessarily technical social science and other academic jargon that has crept into the public sphere (don’t get me started about Derrida and Heidegger), and the overuse and therefore the tweaking of existing words.  Below is a partial list of words and phrases that appeal to me about as much as fingernails on a chalkboard.  This posting is written in a tone of faux smugness/priggishness and is not intended to be mean, so please do not take it to heart if you have ever used or otherwise run afoul of any of the offending terms.  Below that is a slightly hysterical rant/grouse/essay I wrote a year or two ago about the recent appropriation of the word “hipster.”

Enjoy (if that’s the right word).

  • All you need to know about… Click bait for people who want to know the bullet points on a popular or topical issue.
  • Begs the question. A term correctly used in logic and forensics to describe an argument that assumes the very conclusion it is supposed to prove.  Today you will likely hear it on the news meaning something like “suggests,” “poses,” or “implies the question…” as in the statement: “The result of today’s election begs the question of whether the nation is suffering from mass psychosis.”
  • Cool. A ubiquitous, burned-out synonym for “good” or “desirable” in a context of modern pop culture conformity. A common term of reverse snobbery indicating approval and therefore social acceptance among “cool” people (including the speaker) that is mostly identical to the post-1990s use of the word “hip” (see rant below).  Like “hip,” it was once a rebellious alternative to more conventional terms of approval like “good.” Unless I am describing a day below 60 degrees, soup that has sat around too long, or a certain kind of modern jazz, I am attempting—mostly unsuccessfully—to wean myself off of this insipid, reflexive word. It is still preferable to (and more durable than) the more dated “groovy.”
  • DMV. Local Madison Avenue-esque abbreviation for the “District of Columbia, Maryland, Virginia” region. I think of it as representing the “Department of Motor Vehicles.” If I ever become hip (modern usage) enough to voluntarily use this term, I hope that I will be struck by a large Motor Vehicle immediately thereafter.
  • Fetishize. Verb form of fetish—to make something the object of a fetish. To abnormally or inappropriately ascribe more importance or interest to a thing than is necessary or deserved. Fetishize is commonly used by people who fetishize words like “fetishize.”
  • Icon/Iconic. Good words in traditional usages (e.g. medieval religious portraiture).  In the modern popular and corporate media, the new meaning is something like: a thing or person once fresh, original, and important, now reduced to an instantly recognizable cliché or a symbol mostly drained of any content, substance, or meaning.
  • I’m a survivor. A perfectly good phrase, but only if volunteered modestly (i.e. not as a boast) in the course of conversation and if the user has survived a cataclysmic event.
  • Is that a thing? A more diffuse way of saying “Is that a real trend?” or “Is that something people actually do?”
  • Juxtaposition.  Use sparingly.  Otherwise it suffers from some of the complaints against “paradigm.”
  • Narrative. A term borrowed from literary criticism and academic history departments meaning a particular ideological or personal explanation or interpretation.  Often used to disparage or call into question an interpretation by implying a self-serving, or subjective account (or that there are no “objective” accounts).  Instead of “narrative,” I prefer “interpretation” or “explanation” as less loaded alternatives.  Explanations should be examined for their truth content and not dismissed solely because of an implied perspective or the implicit state of mind of the narrator (an error of analysis known as psychologism).
  • No worries. This term obviously means “Don’t worry about it” or “No big deal/problem.” Appropriated from the Aussies around or just before the turn of the twenty-first century. Do not use unless you are Australian and only if followed by “mate.”
  • Paradigm/Paradigm Shift/Paradigmatic. A term that crept out of the philosophy of science of Thomas Kuhn (and a variation on ideas of Karl Popper and others).  A favorite word of hack academics and others trying to sound smart (see “juxtaposition”).  Outside specific academic usage, one should probably avoid this word altogether (and even when writing technically, “frame” or “framework” are less pretentious and distracting).  If a person puts a gun to your head and commands you to use the adjective form, try “paradigmic.”  I don’t know whether or not it is a real word, but it is still better than “paradigmatic,” arguably the most offensive word in modern English (and your example might help start a trend for others under similar duress).
  • Reach[ing/ed] out to… Just call the guy; reaching out to him doesn’t make you a better person any more than “passing away ” makes you any less dead than someone who has simply died.
  • So… A horrible word when said slowly and pronounced “Sooo…” at the beginning of a spoken paragraph or conversation.  An introductory pause word common among people born after 1965. A word that provides an opportunity to sound both didactic and flaky at the same time. A person who uses “So…” this way throughout all but the shortest of conversations can make some listeners from previous generations want to throw a heavy object at the nearest wall.
  • Spiritual/Spirituality. A word commonly (and confidently) thrown down as a solemn trump card in discussions on metaphysics but which means nothing more than a vaguer form of “religiosity” without a commitment to specific beliefs. An ill-defined projection of a speaker’s personality into the realm of metaphysics. The result of one who wants to believe in something otherworldly when existing belief systems are found wanting or are unacceptable whole cloth. An imprecise word whose imprecision gives it a false authority or gravitas when any number of more precise words from philosophy, psychology, or theology would suffice (e.g. animism, cosmology, deism, epiphany, exaltation, inspiration, pantheism, paganism, theism, transcendentalism, and the names of specific religions, etc.). Although the definition of words is seldom important in good-faith critical discussions, one should always ask for a concise definition of spirituality whenever it comes up in conversation. Note: there may be a narrow context or range of usage where this word is appropriate, such as referring to a priest or minister as a spiritual advisor.
  • Talk About. A favorite, if inarticulate, invitation of radio and television interviewers with insufficient knowledge or information to ask actual questions, thus allowing interviewees to spin things in a way that is favorable to their perspective (e.g. “Your company is responsible for the recent catastrophic oil spill. Talk about the safety precautions it has put in place since the disaster.”).
  • Technocrat. The problem with this term is that, like “hipster” (again, see below), it has two related but substantially different meanings. To those under 40, it typically refers to a person belonging to a technical or technological elite who is blind to all but technological solutions to the nation’s and the world’s problems. As such it is a perfectly good—if overused—term of derision against an arrogant class. The issue I have is that there is an older definition meaning simply a specialized public servant. If Benjamin Cohen, Thomas Corcoran, Harry Hopkins, Harold Ickes, George F. Kennan, John McCloy, George C. Marshall, and Frances Perkins are “technocrats,” then I have nothing but admiration for many people covered by this older usage.
  • Text. A noun meaning a work or a portion of writing by a given author.  It is pretentious as hell, and I believe an inaccurate word.  Human beings do not read text; we read language.
  • Thinking outside of the box. An inspirational “inside the box” cliché expressing a good idea: not being bound by a limiting conventional framework (or, in the narrow and correct usage in science/philosophy of science, a paradigm). Science progresses by advancing to a point where it smashes the existing frame (e.g. Relativity superseding the Newtonian edifice in the early twentieth century). Ironically, this term is often used by conventionalist businessmen and women who somehow think of themselves as mavericks and innovators. A term favored by motivational speakers and other manic careerist types and their adjuncts.
  •  To be sure. A common infraction even among important historians and social commentators when conceding a point they consider to be unimportant to their overall argument (usually at the start of a paragraph).  It was fine in Britain 100-150 years ago, but is hard to stomach today because of severe overuse.  Consider instead: “Admittedly,” “Certainly,” “Of course,” “Albeit” (sparingly), and other shorter and less pretentious terms.
  • Trope. An okay word that is overused.
  • You as well. A less efficient way of saying “You too.” A classic case of middle class syllable multiplication (see Paul Fussell’s Class). I think people use this to mix things up rather than rely solely on the less satisfying “You too.” Unconsciously, people might think that a simple sentiment may be made somehow more interesting by expressing it with more words/syllables (e.g. using “indeed” rather than “yes” in agreement). In a similar sense, syllable multiplication gives the illusion of adding content.
  • You’re very welcome. A mirror reply to “Thank you very much.” Common among people under 40, it may be used earnestly, reflexively, or to mock what the young perceive to be the pretentious hyperbole of older people who have the unmitigated gall to add the intensifier “very” when a simple “thank you,” “thanks, ” or understated nod would suffice. Even in a time when “very” is very much overused, one should take any sincere variation of “thank you” for how it was intended—as a gift of civility and etiquette freely offered—and a mocking or mildly sarcastic reply of “you’re very welcome” is at least as smug as this blog posting.

Finally, there is a much-maligned word that I would like to resurrect or at least defend: Interesting. If used as a vague and non-committal non-description or non-answer, it should be avoided unless one is forced into using it (e.g. when one is compelled by circumstances to proffer an opinion or else be rude or lie outright; in this capacity, the guarded “interesting” never fools anybody). However, for people who like ideas and appreciate the power and originality of important concepts, “interesting ” can be used as an understated superlative—a quiet compliment, a note of approval or admiration that opens a door to further explanation and elaboration.

Essay: On the Hip and Hipsters

Present rant triggered by a routine stop at a coffee shop. 

I appreciate that language evolves, that the meanings of words change, emerge, disappear, diverge, procreate, amalgamate, splinter-off, become obscure, and overshadow older meanings, especially in times of rapid change.  I am less sanguine about words that seem to be appropriated (and yes, I know that one cannot “steal” a word) from former meanings that still have more texture, resonance, authenticity, and historical context for me.

For example, over the past decade (1990s?) the word “hipster” has taken on a new—in some ways inverse—but not unrelated meaning relative to the original. The original meaning (to my knowledge) of “hipster” was a late 1930s-1950s blue-collar drifter, an attempted societal drop-out, a modernist cousin of the romantic hero, a borderline antisocial type who shunned the “phoniness” of mainstream life, commercial mass culture, and trends, and who listened to authentic (read: African-American) jazz—bebop—(think of Dean Moriarty from On the Road).

He/she was “hip” (presumably an evolution of 1920s “hep”)—clued-in, disillusioned—to what was really going on in the world behind the facades and appearances (and not today’s idea of “hip” as being in touch with current trends—an important distinction). The hipster presaged the beat of the later 1950s, who was more cerebral, contrived, literary, and urban. In the movies, the male of the hipster genus might have been played by John Garfield or Robert Mitchum. In real life, Jackson Pollock will suffice as a representative example. Hipsters were typically flawed individuals and were often irresponsible and failures as family people. But at least there was something authentic about them.

By contrast, today’s “hipster” seems to be self-consciously affected right down to the point of his goateed chin: consciously urban (often living in newly gentrified neighborhoods), consciously fashionable and ahead of the pack, dismissive of non-hipsters (and quiet about his/her middle-to-upper-middle-class upbringing in the ‘burbs and a childhood once centered around play dates), and a conformist to his generational dictates.  Today’s hipster embodies the calculation and trendiness that the original hipsters stood against (they were noticed, not self-promoted).  Admittedly, hip talk was adopted by the Beats and later cultural types, and elements of it became mainstream and then fell out of favor (as Hemingway observed, “…the most authentic hipster talk of today is the twenty-three skidoo of tomorrow…”).

I realize that this might sound like a “kids these days” grouse or reduction—and I hope it is not; upon the backs of the rising generation ride the hopes for the future of the nation, the species, and the world. I have known many young people—interns and students—the great majority of whom are intelligent, serious, thoughtful, and oriented toward problem solving and social justice. Among them there seems to be a strong current toward rejecting the trends of previous generations. Young people these days have every right to be mad at what previous generations have done to the economy and the environment, and perhaps the hipsters among them will morph into something along the lines of their earlier namesake or something considerably better.

If not, then it is likely that the word will continue to have a double meaning as the original becomes increasingly obscure or until another generation takes it up as its own.

The Wisdom and Sanity of Andrew Bacevich

Book Review

By Michael F. Duggan

Andrew J. Bacevich, Twilight of the American Century, University of Notre Dame Press, 2018.

What do you call a rational man in an increasingly irrational time?  An anomaly?  An anachronism?  A voice in the wilderness?  A faint glimmer of hope? 

For those of us who devour each new article or book by the prolific Andrew J. Bacevich, his latest book Twilight of the American Century—a collection of his post-9/11 articles and essays (2001-2017)—is not only a welcome addition to the oeuvre but something of an event.  In these abnormal times, Bacevich, a former army colonel who describes himself as a traditional conservative, is nothing short of a bomb-thrower against the Washington Consensus.  Likewise, the ominous title of the present collection does not look out of place among the apocalyptic titles of a New Left history professor (Alfred W. McCoy/In the Shadows of the American Century), an apostate New York Times journalist flirting with bottom-up Marxism (Chris Hedges/America: The Farewell Tour), and an economics professor from Brandeis (Robert Kuttner/Can Democracy Survive Global Capitalism?).

The new book was worth the wait.    

A collection by an author with broad, deep, and nuanced historical understanding, Twilight of the American Century lends powerful insight over a wide territory of issues, events, and personalities.  The brevity of these topical pieces makes it possible to pick up the book at any point or to jump ahead to areas of special interest to the reader.  Bacevich, a generalist with depth and a distinctive voice, offers what is without a doubt the freshest and most sensible take on foreign policy and military affairs today.

In terms of outlook, Professor Bacevich harkens back to a time when “conservatism” meant Burkean gradualism—a cautious and moderate outlook advocating terraced progress over the jolts and whipsaw of radical change and destabilizing shifts in policy.  This perspective is based on a realistic understanding of human nature, that people are flawed and that traditions, the law, strong government, and the balancing of power are necessary to accommodate—to contain and balance—the impulses of a largely irrational animal and what Peter Viereck called its “Satanic ego.”  

As regards policy, traditional (read "true") conservatism is fairly non-ideological.  It holds that rapid fundamental change results in instability and eventually violence.  Those who have studied utopian projects or events like the Terror of the French Revolution, the Russian Revolution, or the Cultural Revolution realize that this perspective might be on to something.  Traditional conservatives like Viereck believe that a nation should keep those policies that work while progressing gradually in areas in need of reform.  They also embrace progressive initiatives when they appear to be working or when a more conservative approach is insufficient (Viereck supported the New Deal).  The question is whether or not gradualistic change is even possible in a time of great division in popular politics and lockstep conformity and conventionalism among the members of the Washington elite.

From his shorter works as well as books like The Limits of Power, Washington Rules, and America's War for the Greater Middle East (to name a few) one gets two opposite impressions about Bacevich and his perspective.  The first is that he never abandoned conservatism; it abandoned him and became something very different—a bellicose radicalism of the right that is odious to true conservatives.  The second is more personal: that, like a hero from the Greek tragic tradition, he realized in midlife that what he believed to be true was wrong.  At the beginning of his brutally honest and introspective introduction to the present book, he writes:

“Everyone makes mistakes.  Among mine was choosing at age seventeen to attend the United States Military Academy, an ill-advised decision made with little appreciation for any longer-term implications that might ensue.  My excuse?  I was young and foolish.”

The implication of such a stark admission is that when one errs so profoundly, so early in life, it puts everything that follows on a mistaken trajectory.  While this seems to be tragic in the classical sense (and is certainly "tragic" in more common usage as a synonym for catastrophic), it also appears to be what has made Bacevich the powerful critic he has become: to the wise, truth comes out of the realization of error.  His previous "erroneous" life gives him a template of uncritical assumptions against which to judge the insights hard-bought through experience and independent learning after he arrived at his epiphany, his moment of peripeteia.  The "mistake" (more like an object lesson in harsh self-criticism) and his realization of it with clarity of vision and disillusioned historical understanding made him the superb critic he has become (and to be frank, his career as an army combat officer gives him a certain street cred that cannot be easily dismissed and which he could not have earned elsewhere).  It seems unlikely that Bacevich would have happened on his current perspective as just another academic.

One can only speculate about whether or not he makes the truth of his early "error" out to be more tragic than it really is.  A more charitable reading is that this admission casts him as the hero in a Popperian success story of one who has taken the correct lessons from his experience.  One can hardly imagine a more intellectually fruitful outcome of a midlife crisis.  It is also difficult to imagine how he would have arrived at his depth as a mature commentator via a more traditional academic route.  But I draw close to psychologizing my subject.

In order to be a commentator of the first rank, a writer must know human nature—its attributes as a paragon among animals, its foolishness, its willfulness, its murderous irrationality—and must have the judgment and sense of circumspection that come from historical understanding.  You must know when to criticize and when to forgive, lest you become mean.  Twain was a great commentator because he forgives foibles while telling the truth.  Mencken is sometimes mean because he does not always distinguish between forgivable failing or weakness and genuine fault, and because he excuses himself from the spot-on criticism he levels at others.

An emeritus professor at Boston University, Bacevich knows history as well as any contemporary public intellectual and much better than most.  His historical understanding far exceeds that of the neocon/lib critics and policymakers of the Washington foreign policy Blob.  He carries off his criticism so effectively, not by a lightness of touch, but by frank honesty.  It is apparent from the first line of the book that he holds himself to the same standards and one senses that he is his own toughest critic—his introduction is self-critical to the point of open confession.  Bacevich is tough, but he is one of those rare people who is able to keep himself unblinkingly honest by not exempting himself from the world’s imperfections. 

He dominates polemics then, not by raising his voice, but by reason and clear vision, sequences of surprising observations and interpretations that expose historical mythologies, false narratives, and mistaken perceptions, with an articulate and nuanced, if at times dour, voice.  Frank to the point of bluntness, he calls things by their proper name and has what Hemingway called "the most essential gift for a good writer… a built-in, shockproof, bullshit detector," the importance of which goes double if the writer is a historian.  In less salty language, and in a time when so many commentators tend to defend questionable positions, Bacevich's articles are a tonic because he simply tells the truth.

In his review of Frank Costigliola's The Kennan Diaries, he seems to flirt with meanness and overkill, but perhaps I am being oversensitive.  Like many geniuses—assuming that he is one—Kennan was a neurotic and an eccentric, and it is all too easy to enumerate his many obvious quirks (if we judge great artists, thinkers, and leaders by their foibles and failures, one can only wonder how Mozart, Beethoven, Byron, van Gogh, Churchill, Fitzgerald, and Hemingway would fare; even the Bard would not escape whipping if we judge him by Henry VIII).  As a shameless Kennan partisan who tends to rationalize his subject's personal flaws, perhaps I am just reacting as one whose ox is being gored.  I am not saying that Bacevich gets any of the facts wrong, only that the interpretation lacks charity.

This outlining of Kennan's shortcomings also struck me as ironic and perhaps counterproductive in that Bacevich is arguably the closest living analog or successor to Mr. X as a commentator on policy, both in terms of a realistic outlook and in the role of historian as a Cassandra who is likely to be right and unlikely to be heeded by the powers that be.  Both fill the role(s) of the conservative as moderate, liberal-minded realist, historian as tough critic, and critic as honest broker in times desperately in need of correction.  As regards temperament, there are notable differences between the two: Bacevich strikes one as a stoical Augustinian Catholic where Kennan, at least in his diaries, comes across as a Presbyterian kvetch and perhaps a clinical depressive.  Like Kennan too, Bacevich is right about many—perhaps most—things, but not about everything; perfection is too much to ask of any commentator and we should never seek out spotless heroes.  The grounded historical realism and clear-sightedness of both men are immune to the seduction of bubbles a la mode, the conventionalist clichés of neoliberalism and neoconservatism.

The book is structured into four parts: Part 1. Poseurs and Prophets, Part 2. History and Myth, Part 3. War and Empire, and Part 4. Politics and Culture.  The first part is made up of book reviews and thumbnail character studies.  If you have any sacred cows among the chapter titles or in the index, you may find your loyalty strongly tested, and if you have anything like an open mind, there is a reasonable chance that your faith will be destroyed.  Charlatans and bona fide villains, as well as mere scoundrels and cranks, including the likes of David Brooks, Tom Clancy, Tommy Franks, Robert Kagan, Donald Rumsfeld, Arthur Schlesinger, Paul Wolfowitz, Albert and Roberta Wohlstetter, and, yes, George Kennan, all take their lumps and are stripped of their new clothes for all to see.  Throughout the rest of the book there is a broad cast of characters that receive a similar treatment.

This is not to say that Bacevich does not sing the praises of his own chosen few, including Randolph Bourne, Mary and Daniel Beard, Christopher Lasch, C. Wright Mills, Reinhold Niebuhr, and William Appleman Williams, but here too he is completely frank and provides a full list of favorites up front in his introduction (his inclusion of the humorless misanthrope Henry Adams—another Kennan-like prophet, historian, and WASPy whiner—is a little perplexing).

Where to begin?  Bacevich's essays are widely ranging and yet embody a consistent outlook.  Certain themes overlap or repeat themselves in other guises.  He has a Twain-like antipathy for frauds, fakes, and charlatans and is adept at laying bare their folly (minus Twain's punchlines and folksy persona).  The problem with our time is that these people have dominated it and that their outlooks have become an unquestioned orthodoxy among their followers and in policy circles, in spite of a record of catastrophe that promises more of the same.  To read Bacevich's criticism is to realize that things have gone beyond an establishment wedded to an ideology of mistaken beliefs and into the realm of group psychosis.  One comes away with the feeling that the establishment of our time has become a delusional cult beyond the reaches of reason and perhaps sanity.  Hume reminds us that "reason is the slave of the passions," and it is striking to read powerful arguments that are unlikely to change anything.  If anything, Bacevich's circumspection, clarity of vision, common sense, and impressive historical fluency seem to disprove the observation attributed to Desiderius Erasmus that "in the land of the blind, the one-eyed man is king."  More likely, in a kingdom of the blind, a clear-sighted person will be ignored or burned as a heretic if caught.

Are there any criticisms of Bacevich himself?  Sure.  For instance, one wonders if, like a gifted prosecutor, at times he makes the truth out to be clearer than it may really be.  In this sense his brilliant Washington Rules is a powerful historical polemic rather than a purely interpretive survey (like Robert Dallek's The Lost Peace, which covers much of the same period).  Thus it is fair to regard him as a polemicist as well as an interpretive historian (again, this is not to suggest that he is wrong).  Also, given the imminent threat posed by the unfolding environmental crises, I found myself hoping that he would wade further into topics related to climate change—the emerging Anthropocene (i.e. issues of population, human-generated carbon dioxide, loss of habitat/biodiversity, soil depletion, the plastics crisis, etc.)—and wondering how he might respond to commentators like John Gray, Elizabeth Kolbert, Jed Purdy, Roy Scranton, Edward O. Wilson, and Malthus himself.

The only other criticism is that Bacevich is so prolific that one laments not finding his most recent articles among the pages of the present collection.  This is what is known as a First World complaint.

Unlike a singular monograph, there is no one moral to this collection but a legion of lessons: that events do not occur in a vacuum—that events like Pearl Harbor, the Cuban Missile Crisis, and 9/11, and the numerous U.S. wars in the Near East all had notable pedigrees of error—and that bad policy in the present will continue to send ripples far into the future; that the stated reasons for policy are never the only ones and often not the real ones; that some of the smartest people believe the dumbest things and that just because you are smart doesn’t necessarily mean that you are sensible or even sane; that the majority opinion of experts is often wrong; that bad arguments sometimes resonate broadly and masquerade as good ones and that without a nuanced understanding of history it is impossible to distinguish between them.  If there is a single lesson from this book it is that the United States has made a number of wrong turns over the past decades that have put it on a perilous course on which it continues today with even greater speed.  Thus the title. 

In short, Bacevich, along with Barlett and Steele, and a number of other commentators on foreign policy, economics, and the environment, is one of the contemporary critics whose honesty and rigor can be trusted.  As a matter of principle, we should always read critically and with an open mind, but in my experience, here is an author whose analysis can be taken as earnest, sensible, and insightful.  He is also a writer of the first order.

My recommendation is that if you have even the slightest feeling that things are amiss in American foreign affairs, or if you are simply earnest about testing the validity of your own beliefs, whatever they are, you should read this book.  If you think that everything is fine with the nation and its policy course, then you should buy it today and read it cover to cover.  After all, there is nothing more dangerous than an uncritical true believer and we arrive at wisdom by correcting our mistaken beliefs in light of more powerful arguments to the contrary.  

A Wonderful Life?

By Michael F. Duggan

 For the past few years, I have posted a version of this essay around this time of year.  Having just watched the movie last night, here it is again.  

I have always loved the 1946 Frank Capra seasonal classic It's a Wonderful Life, but have long suspected that it is a sadder story than most people realize (in a way similar to, but more profound than, Goodbye, Mr. Chips).  One gets the impression from the early part of the movie that George Bailey could have done anything, but was held back at every opportunity.  Last year, after watching it, I tried to get my ideas about the film organized and wrote the following essay.

In spite of its heart-warming ending, the 1946 Christmas mainstay by Frank Capra, It's a Wonderful Life, is in some ways a highly ambiguous film and likely a sad story. George Bailey, the film's protagonist played by Jimmy Stewart (in spite of his real-life Republican leanings), is the kind of person who gave the United States its most imaginative set of political programs from 1933 to 1945, programs that shepherded the country through the Depression, won WWII, and consequently produced its greatest period of prosperity from 1945 until the early 1970s (for a real-life sample of this kind of person, see The Making of the New Deal: The Insiders Speak). Bailey wants to do "something big and something important"—to "build things," to "plan modern cities, build skyscrapers 100 stories high… bridges a mile long… airfields…" George Bailey is the big thinker—a "big picture guy"—and his father, Peter Bailey, the staunch, sensible, and fundamentally decent localist hero. Both are the kind of people we need now.

In a moment of frank honesty bordering on insensitivity, George tells his father that he does not want to work in the Building and Loan, that he "couldn't face being cooped up in a shabby little office… counting nickels and dimes."  His father recognizes the restlessness, the boundless talent and quality, the bridled energy, big thinking, and high-minded ambition of his son.  Although wounded, the senior Mr. Bailey agrees with George, saying "You get yourself an education and get out of here," and dies of a stroke the same night—his strategically placed photo remains a moral omnipresence for the rest of the movie (along with presidential photos to link events to specific years).

One local crisis or turn of events after another stymies all of George's plans to go abroad and change the world just as they seem to be on the cusp of fruition. Rather than world-changer, he ends up as a local fixer for the good—a better and more energetic version of a local hero, a status that confirms his "wonderful life" at the film's exuberantly sentimental ending, where a 1945 yuletide flash mob descends on the Bailey house, saving the situation by returning decades' worth of good faith, deeds, and subsequent material wealth and prosperity.  But what is it that sets George apart from the rest of the town that comes to depend upon him over the years?

At the age of 12 he saves his brother Harry from drowning (and by historical extension, a U.S. troopship a quarter of a century later), leaving George deaf in one ear.  Shortly thereafter, his keen perception prevents Mr. Gower, the pharmacist (distracted by the news of the death of his college student son during the Spanish Flu pandemic of 1918-1919), from accidentally poisoning another patient.  As an adult, George's theorizing about making plastics from soybeans by converting a defunct local factory adds to the town's prosperity and makes a less visionary friend (Sam "hee-haw" Wainwright) a fortune, but none for George himself.

Other than saving the Building and Loan from liquidation, George's primary victory is marrying his beautiful and wholesome sweetheart—"Marty's kid sister"—Mary (Donna Reed) and raising a family.  With a cool head, insight, and the help of his wife, George stops a run on the Building and Loan in its tracks with their own readily available honeymoon funds.  The goodwill is reciprocated by most of the Building and Loan's investors (one notably played by Ellen "Can I have $17.50" Corby, later Grandma Walton).

From there George goes on to help an immigrant family buy their own house and in fact builds an entire subdivision for the town's earnest and respectable working class, all the while standing up to the local bully: the cartoonishly sinister plutocratic omnipresence and Manichaean counterweight to everything good and decent in town, Mr. Potter (Lionel Barrymore).  Potter is the lingering, unregulated nineteenth-century predatory plutocracy that, in modified form, cooked the economy during the 1920s, resulting in the Great Depression.  Even Potter comes to recognize George's quality and unsuccessfully attempts to buy him off.

During the war, George's bad ear keeps him out of the fighting (unlike the real Jimmy Stewart, who flew numerous combat missions in a B-24), and he makes himself useful with such patriotic extracurriculars as serving as an air raid warden and organizing paper, rubber, and scrap drives.  And yet he seems to have adapted to his fate of being involuntarily tethered to the small financial institution he inherited from his father, and therefore to the role of the town's protector. He seems more-or-less happily resigned to his fate as a thoroughbred pulling a milk wagon.

Were George Bailey just another guy in Bedford Falls or most towns in the United States, this would indeed be a wonderful life, and for most of us it would be.  Even with all of his disappointments, his life is a satisfactory reply to the unanswerable Buddhist question, "how good would you have it?"  On the face of events, George seems to be a great success at the end of the movie.  In case this is not abundantly apparent from the boisterous but benevolent 1940s Christmastime riot of unabashed exuberance—a reverse bank run or bottom-up version of the New Deal or a spontaneous neighborhood Marshall Plan—at the movie's end, his brother—now a Medal of Honor recipient—proudly proclaims "To George Bailey, the richest man in town." This is confirmed in the homey wisdom inscribed in a copy of Tom Sawyer by George's guardian angel Clarence (a silly fictional device and concession to comic relief in a story about attempted suicide) that "no man is a failure who has friends."

Of course Clarence is introduced into an already minimally realistic story to provide George with the exquisite but equally silly luxury—"a great gift"—of seeing what would have become of the town and its people without him (although to a lover of hot jazz, the business district of Pottersville—an alternate reality to the occasionally overly precious, Norman Rockwell-esque Bedford Falls—looks fairly attractive, with its lounges, jitterbugging swing clubs, a billiards parlor, a (God forbid) burlesque hall, and what seems to be an unkind shot at Fats Waller).

In this Hugh Everett-like alternate narrative device and dark parallel universe, he sees that his wife Mary is an unhappy, mouse-like spinster working in a (God forbid) library, and that Harry drowned as a child and thus was not alive in 1944 to save a fully loaded troop transport.  Likewise, everybody else in the town is an embittered, anti-social, outright bad, or tragic version of themselves relative to the personally frustrating yet generally wonderful Rated-G version of George's wonderful life.

The problem is that George is not ordinary; he is no mere careerist, conventionalist, or money-chasing credentialist—he is a quick-thinking, maverick problem-solver with a heart of gold. He is exactly the kind of person we need now, but whom the establishment of our own time despises.  Although harder to identify on sight, in our own time the charming and attractive Mr. Potters of the world have won.

In literary terms, George is not a typical beaten-down loser-protagonist of the modernist canon; he is not a Bartleby the Scrivener, a J. Alfred Prufrock, a Leopold Bloom, or a Willy Loman, but then neither is his stolid father (George is perhaps more akin to Thomas Hardy's talented but frustrated Jude Fawley or a better version of James Hilton's Mr. Chips—characters who might have amounted to more had they not been limited or constrained by external circumstances).

Rather, George is more in keeping with the great tragic-heroic protagonists of the Greeks and Shakespeare (i.e. a person who could have pushed the limits of human possibility), if only he could have gotten up to bat.  He might have done genuinely great things, had his plans gotten off the ground, had the unforeseen chaos of life and social circumstances not intervened.  Just after breaking his father's heart by revealing his ambitions, George correctly assesses and confides that the old man is a "great guy."  True enough.  But the conspicuous fact is that the older Bailey is much more on the scale of a local hero, a "pillar of the community"—a necessary type in any town for extinguishing the day-to-day brush fires—and is therefore perhaps more fully actualized and resigned to his role (even though it kills him mere hours later—or was it George's announcement?).  But George has bigger ambitions and presumably abilities to match.

In a perfect world, someone like Mr. Bailey, Sr. would be better (and in fact is) cast in the role to which his son is relegated, even though his ongoing David versus Goliath battles with Potter likely contributed to his early death.  George might have found an even more wonderful life if he had gone to college and law school and then gone to Washington to work for Tommy Corcoran and Ben Cohen, or as a project manager of a large New Deal program, or managing war production against the Nazis and Imperial Japanese.  Instead he organizes scrap and rubber drives and admonishes people to turn off their lights during air raid drills.  In a better world, a lesser man could have handled all the relative evils of Bedford Falls.

Of course the alternative is that George is delusional throughout the film, that he is not as great as we are led to believe, that—like most of us—he is not as good as his biggest dreams. But there is nothing in the film to suggest that this is the case.

The moral for our own time is that we need both kinds of Mr. Baileys—father and son—and it is clear that in spite of numerous local victories, George could have done far more in the broader world (his less interesting younger brother, Harry, seems to have unintentionally hijacked George's plans and makes a good go of them: he goes off to college, lands a plum research position in Buffalo as part and parcel of marrying a rich and beautiful wife, and then disproportionately helps win a world war, and returns, amazingly, as the same happy-go-lucky person, complete with our nation's highest military honor, after lunching with Harry and Bess at the Executive Mansion). George is the Rooseveltian top-down planner and social democrat while Mr. Bailey, Sr., is the organic, Jane Jacobs localist.

Even if we accept Capra's questionable premise that George's life is the most wonderful of possible alternatives (or at least pretty darned good), the ending is not entirely satisfactory for people used to Hollywood Endings: George's likable but absent-minded Uncle Billy inadvertently misplaces $8,000 (perhaps ten or twenty-fold that amount in 2018 dollars) into Mr. Potter's hands (a crime witnessed and abetted by Mr. Potter's silent, wheelchair-pushing flunky, who, even without uttering a single line in the entire movie, is arguably the most despicable person in it—an equally silent counterpart to the photograph of the late Mr. Bailey, Sr.), and his honest mistake is never revealed nor presumably is the money ever recovered.

Mr. Potter's crime does not come to light, and George is very nearly framed by the incident and driven to despair. Instead of a watery self-inflicted death in the Bedford River, he is happily bailed out (Bailey is bailed out after bailing out the town so many times), first by a homely angel and then by the now prosperous town of the immediate postwar period.

The fact that his rich boyhood chum, the affable frat-boyish Sam Wainwright, is willing to extend $25,000 of his company's petty cash puts the crisis into wider focus and perspective and makes us realize that George was never really in that much trouble, at least financially (although the SEC might have found such a large transfer to a close friend with a mysterious $8,000 deficit to be suspicious).  Wainwright's telegram is a comforting wink from Capra himself.  Had he not been so distracted by an accumulation of trying circumstances—the daily slings and arrows of being a big fish in Bedford Falls—this kindness of Sam's and of the whole town is something that George might have intuited himself, thus preventing his breakdown in the first place.  The bank examiner (district attorney?), in light of the crowd's vouching for George's reputation, tears up the summons, grabs a cup of kindness, and heartily joins in singing "Hark! The Herald Angels Sing."

Still, the loss of $8,000 in Bedford Falls was a crisis that almost drove George to suicide.  If he had been a manager of wartime industrial production, a similar loss would have been a rounding error that nobody but an accountant would have noticed.

At the movie's end, George is safe and obviously touched by the outpouring of his community and appreciates just how good things really are (and you just know that any scene that begins with Donna Reed rushing in and clearing an entire tabletop of Christmas wrapping paraphernalia to make room for a torrential charitable cash flow is going to be ridiculously heart-warming). But George remains as local and provincial as before; he has just been instructed to be happy with the way things have turned out (why not, it's almost 1946 in America and everything turned out just fine).  His wonderful life has produced a wonderful effort to meet a (still unsolved) crisis.  Just imagine what he could have done with 1940s Federal funding and millions of similarly well-intended people to manage—like those who engineered the New Deal, the WWII mobilization, and the Marshall Plan. Would his name have ranked along with the likes of Harry Hopkins, Rex Tugwell, Adolf Berle, Raymond Moley, Frances Perkins, John Kenneth Galbraith, Thomas Corcoran, Benjamin Cohen, Averell Harriman, George Marshall, George Kennan, and Eleanor and Franklin Roosevelt themselves?

It is impossible not to surrender to the warmth and decency of this film’s ending, and I realize that this essay has been minute and dissecting in its analysis.  What is the lesson of all of this?  I think the moral to those of us in 2018 is that below the surface of this wonderful movie is a cautionary tale, and that if we are to face the emerging crises of our own time, we will at the very least require a whole Brains Trust of George Baileys in the right places and legions of local people like his father.  There is a danger in shutting out this kind of person. We must also come to recognize the Mr. Potters of big business and their minions who have dominated for the past half-century.  I suspect that they look nothing like Lionel Barrymore.

The Last Realist: George Herbert Walker Bush

By Michael F. Duggan

There was a time not long ago when American foreign policy was based on the sensible pursuit of national interests.  During the period 1989-1992 the United States was led by a man who was perhaps the most well-qualified candidate for the office in its history—a man who had known combat, who knew diplomacy, intelligence, legislation and the legislative branch, party politics, the practicalities of business and organizational administration, and how the executive and its departments functioned.  For those of us in midlife, it seems like only yesterday, and yet in light of what has happened since in politics and policy, it might as well be a lifetime and a world away.  The question is whether his administration was a genuine realist anomaly or merely a preface to what the nation has become.

Regardless, here's to Old Man Bush: a good one-term statesman and public servant who was both preceded and followed by two-term mediocrities and mere politicians.  A Commander-in-Chief who oversaw what was arguably the most well-executed large-scale military campaign in United States history (followed by poll numbers that might have been the highest in modern times) only to lose the next election.  A moderate in politics and a good man personally who famously broke with the NRA and gave the nation a very necessary income tax hike on the rich (for which his own party never forgave him), but who, against his better instincts, adopted the knee-to-groin campaign tactics of party torpedoes and handlers in what became one of the dirtiest presidential campaigns in US history (1988) and ushered in the modern period of "gotcha" politics.

Some critics at the time observed that Bush arose on the coattails of others, a loyal subordinate, a second-place careerist and credentialist who silver-medaled his way to the top, a New England blue blood carpetbagger who (along with his sons) ran for office in states far from Connecticut and Maine.  Such interpretations do violence to the dignity, nuance, diversity, and sheer volume of the man's life.  Bush was the real thing: a public servant—an aristocrat who dedicated most of his life to serving the country.  Prior to becoming President of the United States, Bush served in such diverse roles as torpedo bomber pilot, wildcat oilman, Member of the House of Representatives, liaison to a newly reopened China, U.S. Ambassador to the United Nations, Chairman of the RNC, Director of the CIA, and Vice President of the United States.  He was not, however, a spotless hero.

Foreign Affairs

The presidency of George Herbert Walker Bush (just plain “George Bush” prior to the late 1990s) was a brief moment, in some respects an echo of the realism that served the nation so well in the years immediately following WWII.

A foreign policy realist in the best sense of the term, Bush was the perfect man to preside over the end of the Cold War, and my sense is that the most notable foreign policy achievements of the Reagan presidency probably belong even more to his more knowledgeable vice president, with whom he consulted over Thursday lunches.  As president in his own right, it was Bush who, with the help of a first team of pros that included the likes of Brent Scowcroft, James Baker, and Colin Powell, let Russia down gently after the implosion of the USSR (he knew that great nations do not take victory laps), only to be followed by amateurs and zealots who arrogantly pushed NATO right up to Russia's western border and ushered in what looks increasingly like a dangerous new Cold War.  If a great statesman/woman is one who has successfully managed at least one momentous world event, then his handling of the end of the Cold War alone puts him into this category.

Desert Storm

Interpreted as a singular U.S. and international coalition response to the violation of one nation's territorial sovereignty by another—and in spite of later unintended consequences—Desert Shield/Storm was a work of art: President Bush gave fair warning (admittedly risky) to allow the aggressor a chance to pull back and reverse course, masterfully sought and got an international mandate and then congressional approval, built a coalition, amassed his forces, went in with overwhelming force and firepower, achieved the goals of the mandate, and got the hell out.  But the success or failure of the "Hundred-Hour War" depends on whether it is weighed as a geopolitical "police action," as just another episode of U.S. adventurism in the Near East, or as some kind of hybrid.

As a stand-alone event then, the campaign was “textbook,” but then in history there is no such thing as a completely discrete event.  Can the operational success of Desert Storm be separated from what others see as a more checkered geopolitical legacy?  Can the success of the “felt necessities of the time” of a theater of combat be tarnished by later, unseen developments?  Was the “overwhelming force” of the Powell Doctrine (which could equally be called the Napoleon, Grant, MacArthur, or LeMay Doctrine) gross overkill and a preface to the “Shock and Awe” of his son’s war in the region?  Was his calculated restriction of press access in a war zone a precursor to later and even more propagandistic wars with even less independent press coverage?

Just as history never happens for a single reason, no victory is truly singular, pure, and unalloyed.  Twenty-six years on, I realize that my rosy construction of what has since become known as the First Gulf War (or the Second Iraq War in the interpretation of Andrew Bacevich) is not shared by all historians.  Questions remain: was Saddam able to invade Kuwait because Bush and his team were distracted by momentous events in Europe?  Was the Iraqi invasion merely a temporary punitive expedition that could have been prevented if Kuwait hadn't aggressively undercut Iraqi oil profits?  Would Hussein have withdrawn his forces on his own after sufficiently making his point?  Was April Glaspie speaking directly for President Bush or Secretary Baker when she met with the Iraqi leader on July 25, 1990?  War is a failure of policy, and could the events leading up to the invasion (including public comments made by Baker's spokesperson, Margaret Tutwiler) have been seen by the Iraqis as a green light, in much the same way that the North Koreans could have construed Acheson's "Defensive Perimeter" speech to the National Press Club in early 1950?  (See Bartholomew Sparrow, The Strategist: Brent Scowcroft and the Call of National Security, 420-421).

Some historians have been more critical in their "big picture" assessments of Desert Storm, claiming that when placed in the broader context of an almost four-decade-long American war for the greater Middle East, this was just another chapter in a series of misguided escalations (see generally Bacevich, America's War for the Greater Middle East: A Military History).  In this construction too, the war planners had not decapitated the serpent and had left Hussein's most valuable asset—the Republican Guard—mostly intact to fight another day against an unsupported American ally whom Mr. Bush had arguably encouraged to rise up: the Iraqi Kurds (as well as the Shiites).

While some of these points are still open questions, the mandate of the U.N. Security Council resolution did not include taking out Hussein.  In light of what happened after 2003, when we did topple the regime, Bush I and his planners seem all the more sensible, in my opinion.  Moreover, the "Highway of Death" was beginning to look like just that—a traffic jam of gratuitous murder, laser-guided target practice, "a turkey shoot" against a foe unable to defend himself, much less fight back.  With the Korean War as historical example, Scowcroft was cognizant of the dangers implicit in changing or exceeding the purely military goals of a limited mandate in the face of apparent easy victory.  Having met the stated war aims, Powell and Scowcroft both advocated ceasing the attack, as did Dick Cheney.  (See Sparrow, The Strategist: Brent Scowcroft and the Call of National Security, 414-415).

When second-guessed about why the U.S. did not “finish the job,” his advisors answered with now haunting and even prophetic rhetorical questions about the wisdom of putting U.S. servicemen between Sunnis and Shiites  (James Baker’s later observation about the war in the Balkans that “[w]e don’t have a dog in that fight” seems to have applied equally to internal Iraqi affairs).  Besides, it would have made no sense to remove a powerful secular counterbalance to Iran, thus making them the de facto regional hegemon.  Did the U.S. “abandon” Iraq while on the verge of “saving” it?  Should the U.S. have “stayed” (whatever that means)?  My takeaway from the history of outsiders in the Middle East is that the only thing more perilous than “abandoning” a fight in the region once apparent victory is secured is to continue fighting, and that once in, there is no better time to get out than the soonest possible moment.  It would seem that the history of U.S. adventures in Iraq since 2003—the Neocon legacy of occupation and nation-building—speaks for itself.

Bush's apparently humanitarian commitment of American forces to the chaos of Somalia in the waning days of his administration still baffles realist sensibilities and seems to have honored Bush's own principles in the breach.  It simply makes no sense.  One can claim that it was purely a temporary measure that grew under the new administration, but it is still hard to square with the rest of Bush's foreign policy.

Of course there were other successes and failures of a lesser nature: high-handedness in Central America that included a justified but excessive invasion of Panama.  The careful realist must also weigh his masterful handling of the demise of the Soviet Union against what looks like a modest and principled kind of economic globalization and what appears to be a kind of self-consciously benevolent imperialism: the United States as the good cop on the world beat.  The subsequent catastrophic history of neoliberal globalization and U.S. adventurism has cast these budding tendencies in a more sobering light.

Politics and Domestic Policy

Domestically, Bush's generous instincts came to the fore early on and reflected the Emersonian "Thousand Points of Light" of his nomination acceptance address, and he did more than most people realize.  He gave us the Americans with Disabilities Act (ADA)—one of the most successful pieces of social legislation of recent decades—the modest Civil Rights Act of 1991, the 1990 amendments to the Clean Air Act, and a semiautomatic rifle ban; he successfully handled the consequences of the Savings and Loan Crisis; and of course he put David Souter on the High Court.  Perhaps he did not know how to deal with the recession of 1991.  My reading is that the recession was an ominous initial rumbling of things to come, as American workers increasingly became victims of economic globalization.  Some historians believe that the good years of the 1990s owe a fair amount to Bush's economic policies, including the budget agreement of 1990.  Bush fatefully underestimated the rise of the far right in his own party, making his plea for a "kinder, gentler" nation and political milieu a tragic nonstarter.  His catchphrase from the 1980 campaign characterizing the absurdity of supply-side economics as "voodoo economics" was spot-on, but it was another apostasy that true believers in his own party were unlikely to forget or forgive.  Certainly he did not do enough to address the AIDS crisis.

It is shocking that a man of Bush's sensibilities and personal qualities conducted the presidential campaign of 1988 the way he did.  Against a second-rate opponent, the "go low" approach now seems like gross and unnecessary overkill—a kind of political "Highway of Death"—that was beneath the dignity of such an honorable man.  On a similar note, it is hard to understand his occasional hardball tactics, like the bogus fight he picked with Dan Rather on live television at the urging of handlers.  Perhaps it was to counter the charges of his being a "wimp."

Again, this approach seems to have been completely unnecessary—an overreaction urged by politicos and consultants from the darker reaches of the campaign arts.  How is it even possible that a playground epithet like "wimp" would find traction against a man of Bush's demonstrated courage, honor, and commitment?  All anybody had to do was remind people that he was the youngest navy pilot in the Second World War, that he had enlisted on the first day he legally could, and that he was fished out of the Pacific after being shot down in an Avenger torpedo bomber (but then Bush embodied an ethos of aristocratic modesty and the idea that one did not talk about oneself, much less brag); by comparison, the rugged Ronald Reagan never went anywhere near a combat zone (as a documentary on the American Experience noted, "Bush was everything Reagan pretended to be": a war hero, college athlete, and a family man whom children loved unconditionally).  I am not sure Clinton ever made any pretense of fortitude.

We ask our presidents to succeed in two antithetical roles: that of politician and that of statesman, and in recent years the former has seemingly triumphed at the expense of the latter.  Style has mostly trumped substance, something that underscores a flaw in our system and what it has become.  As casualties of reelection campaigns against charismatic opponents, Gerald Ford and "Bush 41" might be a metaphor for this flaw and of our time, and a lesson emphasizing the fine distinction that a single-term statesman is generally superior and preferable to a more popular two-term politician.  Reagan, Clinton, Bush 43, and Obama were all truly great politicians, and unless you were specifically against them or their policies, there was a reasonable chance that they could win you over on one point or another with style, communication skills, and magnetic charm.  That said, and unlike the senior Bush, I would contend that there is not a genuine statesman in that group.

It is difficult for any president to achieve greatness in either foreign or domestic affairs, much less in both (as a latter-day New Dealer, I would say that FDR may have been the last to master both).  George Herbert Walker Bush was a good foreign policy president and not bad overall—a leader at the heart of a competent administration.  By all accounts, he was a good man, and the people who knew him are heaping adjectives on his memory: dignity, humility, honor, courage, class—a good president and a notable American public servant.  But ultimately personal goodness has little to do with the benevolence or harm of policy; to paraphrase Forrest Gump, good is what good does (some policy monsters are personally charming and even decent while some insufferable leaders may produce great and high-minded policy), and as aging news transforms with greater circumspection into history, the jury is still out on much of the complex legacy of Bush I.

Subsequent events have cast doubt on what seemed at the time to be spotless successes, and realistic gestures now seem more like a preface to less restrained economic internationalism and military adventurism.  Still, I am willing to give the first President Bush the benefit of the doubt on interpretations of events still in flux.  Just in writing this, and given what has happened in American politics and policy ever since, I have the sinking feeling that we will not see his like again for a long time, if ever.

Geoffrey Parker

Book Review

Geoffrey Parker, Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century, Yale University Press, 2014, 904 pages.

Crises, Then and Now

Reviewed by Michael F. Duggan

This book is about a time of climate disasters, never-ending wars, economic globalism complete with mass human migration, imbalances, and subsequent social strife–a period characterized by unprecedented scientific advances and backward superstition.  In other words, it is a world survey about the web of events known as the Seventeenth Century.  Although I bought it in paperback a number of years ago, I recently found a mint condition hardback copy of this magisterial tome by master historian, Geoffrey Parker (Cambridge, St. Andrews, Yale, &c.), and felt compelled to write about it, however briefly.  I have always been drawn to this century because of its contrasts as the one that straddles the transition from the Early Modern to the Ages of Reason and Enlightenment and more broadly marks the final shift from Medieval to Modern (even before the Salem colonists hanged neighbors suspected of witchcraft, Leibniz and Newton had independently begun to formulate the calculus).

In 1959, historian H. R. Trevor-Roper presented the macro-historical thesis of the "General Crisis," or the interpretive premise that the Seventeenth Century can be characterized by an overarching series of crises, from horrible regional wars (e.g. the Thirty Years War, the English Civil War and its spillover into Scotland and Ireland) and rebellions, to widespread human migration and the subsequent spread of disease, any number of specific plagues, global climate change, and a long litany of some of the most extreme weather events in recorded history (e.g. the "little ice age").  When I was in graduate school, I had intuited this premise (perhaps after reading Barbara Tuchman's A Distant Mirror, about the "calamitous Fourteenth Century"), but was hardly surprised upon discovering that Trevor-Roper had scooped the idea by 40 years.

Parker has taken this thesis and generalized it in detail beyond Europe to encompass the entire world–to include catastrophic events and change throughout the Far East, Russia, China, India, Persia, the greater Near East, Africa, North America, etc.  Others, including Trevor-Roper himself, also saw this in terms of global trends and scope, but, to my knowledge, Parker's book is the fullest and most fleshed-out treatment.  It is academic history, but it is well written, readable for a general audience, and well researched on the grandest of scales.  For provincial Western historians (such as myself), the broader perspective is eye-opening and suggestive of human commonality rather than divergence; we are all a part of an invasive plague species and we are all victims of events, nature, and our own nature.

Although I am generally skeptical of macro interpretive theories/books that try to explain or unify everything that happened during a period under a single premise–i.e. the more a theory tries to explain, the more interesting and important it is, but the weaker it usually is as a theory and therefore the less it explains (call it a Heisenberg principle of historiography)–this one may be on to something, at least as description.  The question, I suppose, is the degree to which the events of this century, overlapping or sequential in both geography and time, are interconnected or emerge from common causes, or whether they were a convergence of factors both related and discrete; or rather, is the century a crisis, a sum of crises, or both?  To those who see human history in the broadest of terms–in terms of the environment, of humankind as a singular prong of biology, and therefore of human history as an endlessly interesting and increasingly tragic chapter of natural history–this book will be of special interest.

As someone who thinks that one of the most important and productive uses of history is to inform policy and politics, it is apparent (obvious, really) that the author intends this book to be topical–a wide-angle and yet detailed account of another time for our time.  In general, the Seventeenth Century is a good tonic for those who believe that history is all sunshine and roses or that human progress (such as it is) is all a rising road.  A magnum opus of breathtaking scope and ambition, this book is certainly worth looking at (don't be put off by its thickness; you can pick it up at any time and read a chapter here or there).

 

 

Fat Man and Little Boy

I wrote this for the 70th Anniversary of the atomic bombings of Japan.  It appeared in an anthology at Georgetown University.  This is taken from a late draft, but the editing is still a bit rough.

 

Roads Taken and not Taken: Thoughts on “Little Boy” and “Fat Man” Plus-70

By Michael F. Duggan

We knew the world would not be the same.  A few people laughed, a few people cried. Most people were silent.  I remembered the line from the Hindu scripture, the Bhagavad Gita… “I am become Death, the destroyer of worlds.”

-Robert Oppenheimer

 

When I was in graduate school, I came to sort perspectives on the decision to drop the atomic bombs on Japan into three categories.

The first was the "Veterans Argument"—that the dropping of the bombs was an affirmative good.  As this name implies, it was a position embraced by some World War Two veterans and others who had lived through the war years, and it seems to have been based on lingering sensibilities of the period.  It was also based on the view that the rapid end of the war had saved many lives—including, in many cases, their own—and that victory had ended an aggressive and pernicious regime.  It also seemed tinged with an unapologetic sense of vengeance and righteousness cloaked as simple justice.  They had attacked us, after all—Remember Pearl Harbor, the great sneak attack?  More positively, supporters of this position would sometimes cite the fact of Japan's subsequent success as a kind of moral justification for dropping the bombs.

Although some of the implications of this perspective cannot be discounted, I tended to reject it; no matter what one thinks of Imperial Japan, the killing of more than 150,000 civilians can never be an intrinsic good.  Besides, there is something suspect about the moral justification of horrible deeds by citing all of the good that came after them, even if true.1

I had begun my doctorate in history a few years after the 50th anniversary of the dropping of the Hiroshima and Nagasaki bombs, and by then there had been a wave of "revisionist" history condemning the bombings as intrinsically bad, inhumane, and unnecessary—as "technological band-aids" to end a hard and bitter conflict.  The argument was that by the summer of 1945, Japan was on the ropes—finished—and would have capitulated within days or weeks even without the bombs.  Although I had friends who subscribed to this position, I thought that it was unrealistic in that it interjected idealistic sensibilities and considerations that seemed unhistorical to the period and the "felt necessities of the times."

This view was also associated with a well-publicized incident of vandalism against the actual Enola Gay at a Smithsonian exhibit, which ignited a controversy that forced the museum to change its interpretive text to tepid factual neutrality.

And then there was a kind of middle-way argument—a watered-down version of the first—asserting that the dropping of the bombs, although not intrinsically good, was the best of the possible options.  The other primary option was a two-phased air-sea-land invasion of the main islands of Japan: Operation Olympic, scheduled to begin on November 1, 1945, and Operation Coronet, scheduled for early March 1946 (the two operations were subsumed under the name Operation Downfall).  I knew people whose fathers and grandfathers were still living who had been in WWII, and who believed with good reason that they would have been killed fighting in Japan.  It was argued that the American casualties for the war—approximately 294,000 combat deaths—would have been multiplied two or three fold if we had invaded, to say nothing of the additional millions of Japanese civilians who would likely have died resisting.  The Okinawa campaign of April-June 1945, with the viciousness and intensity of the combat there and the appalling casualties on both sides, was regarded as a kind of microcosm, a prequel of what an invasion of Japan would be like.2

The idea behind this perspective was one of realism: in a modern total war against a fanatical enemy, one took off the gloves in order to end it as soon as possible.  General Curtis LeMay asserted that it was the moral responsibility of all involved to end the war as soon as possible, and that if the bombs ended it by even a single day, then using them was worth the cost.3  One also heard statements like "what would have happened to an American president who had a tool that could have ended the war, but chose not to use it, and by doing so doubled our casualties for the war?"  It was simple, if ghastly, math: the bombs would cost less in terms of human life than an invasion.  With an instinct toward the moderate and sensible middle, this was the line I took.

In graduate school, I devoured biographies and histories of the Wise Men of the World War Two/Cold War era foreign policy establishment—Bohlen, Harriman, Hopkins, Lovett, Marshall, McCloy, Stimson, and of course, George Kennan.  When I read Kai Bird's biography, The Chairman: John J. McCloy and the Making of the American Establishment, I was surprised by some of the back stories and wrangling of the policy makers behind the decisions to drop the bombs.4  It also came as a surprise that John McCloy (among others) had in fact vigorously opposed the dropping of the atomic bombs, perhaps with very good reason.

Assistant Secretary of War John McCloy was nobody's idea of a dove or a pushover.  Along with his legendary policy successes during and after WWII, he was controversial for ordering the internment of Japanese Americans and for not bombing the death camps in occupied Europe, because doing so would divert resources from the war effort and victory.  He was also the American High Commissioner for occupied Germany after the war and had kept fairly prominent Nazis in their jobs and kept out of prison German industrialists who had played ball with the Nazi regime. Notably, in the 1960s, he was one of the only people on record who flatly stood up to President Lyndon Johnson after getting the strong-arm "Johnson treatment" and was not ruined by it.  And yet this tough-guy hawk was dovish on the issue of dropping the atomic bombs.

The story goes like this: In April and May 1945, there were indications that the Japanese were seeking a settled end to the war via diplomatic channels in Switzerland and through communications with the Soviets—something that was corroborated by U.S. intelligence.5 Armed with this knowledge, McCloy approached his boss, the Secretary of War and arguably the father of the modern U.S. foreign policy establishment, "Colonel" Henry L. Stimson.  McCloy told Stimson that the new and more moderate Japanese Prime Minister, Kantaro Suzuki, and his cabinet were looking for a face-saving way to end the war.  The United States was demanding an unconditional surrender, and Suzuki indicated that if this language was modified, and the Emperor was allowed to remain as a figurehead under a constitutional democracy, Japan would surrender.

Among American officials, the debates on options for ending the war included many of the prominent players, policy makers and military men like General George C. Marshall, Admiral Leahy and the Chiefs of Staff, former American ambassador to Japan Joseph Grew, Robert Oppenheimer (the principal creator of the bomb), and his Scientific Advisory Panel, to name but a few.  It also included President Harry Truman.  Among the options discussed were whether or not to give the Japanese "fair warning" and whether the yet-untested bomb should be demonstrated in plain view of the enemy.  There were also considerations of deterring the Soviets, who had agreed at Yalta to enter the war against Japan, from additional East Asian territorial ambitions.  Although it was apparent to Grew and McCloy that Japan was looking for a way out, therefore making an invasion unnecessary, the general assumption was that if the atomic bombs were functional, they should be used without warning.

This was the recommendation of the Interim Committee, which included soon-to-be Secretary of State James Byrnes, and which was presented to Truman by Stimson on June 6.6  McCloy disagreed with these recommendations and cornered Stimson in his own house on June 17.  Truman would be meeting with the Chiefs of Staff the following day on the question of invasion, and McCloy implored Stimson to make the case that the end of the war was days or weeks away and that an invasion would be unnecessary.  If the United States merely modified the language of unconditional surrender and allowed the Emperor to remain, the Japanese would surrender under de facto unconditional terms.  If the Japanese did not capitulate after the changes were made and fair warning was given, the option of dropping the bombs would still be available.  “We should have our heads examined if we don’t consider a political solution,” McCloy said.  As it turned out, he would accompany Stimson to the meeting with Truman and the Chiefs.

Bird notes that the meeting with Truman and the Chiefs was dominated by Marshall and focused almost exclusively on military considerations.7  As Bird writes, “[e]ven Stimson seemed resigned now to the invasion plans, despite the concession he had made the previous evening to McCloy’s views.  The most he could muster was a vague comment on the possible existence of a peace faction among the Japanese populace.”  The meeting was breaking up when Truman said, “No one is leaving this meeting without committing himself.  McCloy, you haven’t said anything.  What is your view?”  McCloy shot a quick glance at Stimson, who said to him, “[s]ay what you feel about it.”  McCloy had the opening he needed.8

McCloy essentially repeated the argument he had made to Stimson the night before.  He noted that a negotiated peace with Japan would preclude the need for Soviet assistance, thereby depriving the Soviets of any excuse for an East Asian land grab.  He also committed a faux pas by actually mentioning the bomb by name and suggesting that it be demonstrated to the Japanese.  Truman responded favorably, saying “That’s exactly what I’ve been wanting to explore… You go down to Jimmy Byrnes and talk to him about it.”9  As Bird points out,

[b]y speaking the unspoken, McCloy had dramatically altered the terms of the debate.  Now it was no longer a question of invasion.  What had been a dormant but implicit option now became explicit.  The soon-to-be tested bomb would end the war, with or without warning.  And the war might end before the bomb was ready.

Increasingly, however, the dominant point of view was that the idea of an invasion had been scrapped and that, in the absence of a Japanese surrender, the bombs would be dropped.10

After another meeting with what was called the Committee of Three, most of the main players agreed “that a modest change in the surrender terms might soon end the war” and that “Japan [would be] susceptible to reason.”11  Stimson put McCloy to work on changing the terms of surrender, specifically the language of Paragraph 12, which referenced the terms the Japanese had found unacceptable.  McCloy did not mention the atomic bomb by name.  By now, however, Truman was gravitating toward Byrnes’s position of using the bombs.

After meeting with the president on July 3, Stimson and McCloy “solicited a reluctant invitation” to attend the Potsdam Conference, but instead of traveling with the president’s entourage aboard the USS Augusta, they secured their own travel arrangements to Germany.  The newly sworn-in Secretary of State, James Byrnes, would sail with the president and was part of his onboard poker group.12  The rest, as they say, is history.

At Potsdam, Truman was told by the Soviets that Japan was once again sending out feelers for a political resolution. Truman told Stalin to stall them for time, while reasserting the demand for unconditional surrender in a speech in which he buried the existence of the bombs in language so vague that the Japanese leaders likely did not pick up on the implications.13  Japan backed away.  Truman’s actions seem to suggest that, under Byrnes’s influence (and perhaps independent of it), he had made up his mind to drop the bombs and wanted to sabotage any possibility of a political settlement.  As Bird notes, “Byrnes and Truman were isolated in their position; they were rejecting a plan to end the war that had been endorsed by virtually all of their advisors.”14  Byrnes’s position had been adopted by the president over McCloy’s political option.  As Truman sailed for home on August 6, 1945, he received word that the uranium bomb nicknamed “Little Boy” had been dropped on Hiroshima, with the message “Big bomb dropped on Hiroshima August 5 at 7:15 P.M. Washington time.  First reports indicate complete success which was even more conspicuous than earlier test.”  Truman characterized the attack as “The greatest thing in history.”15  Three days later the plutonium bomb “Fat Man” fell on Nagasaki.  The Soviets entered the fighting against Japan on August 8.  The war was over.

Given Byrnes’s reputation as a political operative of rigid temperament and often questionable judgment, one can only wonder whether the dropping of the bombs was purely gratuitous.  Did he and the president believe that the American people wanted and deserved their pound of flesh, almost four years after Pearl Harbor and some of the hardest combat ever fought by U.S. servicemen?16  Of course there were also the inevitable questions of “what would Roosevelt have done?”

With events safely fixed in the past, historians tend to dislike messy and problematic counterfactuals, and one can only wonder whether McCloy’s plan for a negotiated peace would have worked.  One of the most constructive uses of history is to inform present-day policy decisions through the examination of what has worked and what has not worked in the past, and why.  Even so, the vexing—haunting—questions about the necessity of dropping the atomic bombs remain open.  The possibility of a political resolution to the war seems at the very least to have been plausible.  The Japanese probably would have surrendered by November, perhaps considerably earlier, as the result of negotiations, but there is no way to tell for certain.17  As it was, in August 1945, Truman decided to allow the Emperor to stay on anyway, and our generous reconstruction policies turned Japan (and Germany) into miracles of representative liberal democracy and enlightened capitalism.

Even if moderate elements in the Japanese government had been able to arrange an effective surrender, there is no telling whether the Japanese military, and especially the army, would have gone along with it.  As it was—and after two atomic bombs had leveled two entire cities—some members of the Japanese army still preferred self-destruction over capitulation, and a few even attempted a coup against the Emperor to preempt his surrender speech to the Japanese people.

This much is certain: our enemies in the most costly war in human history have now been close allies for seven decades (as the old joke goes, if the United States had lost WWII, we would now be driving Japanese and German cars).  Likewise, our Cold War enemy, the Russians, in spite of much Western tampering within their sphere of influence, now pose no real threat to us.  But the bomb remains.

Knowledge may be lost, but an idea cannot be un-invented; as soon as a human being put arrow to bow, the world was forever changed.  The bomb remains.  It remains in great numbers in at least nine nations and counting, in vastly more powerful forms (the hydrogen bomb), with vastly more sophisticated means of delivery.  It is impossible to say whether the development and use of the atomic bomb was and is categorically bad, but it remains for us a permanent Sword of Damocles, and the nuclear “secret” is the knowledge of Prometheus.  It is now a fairly old technology, the same vintage as a ’46 Buick.

The bombings of Hiroshima and Nagasaki broke the ice on the use of these weapons in combat and will forever stand as a precedent for anyone else who may use them.  The United States is frequently judgmental of the actions and motives of other nations, and yet it alone has used nuclear weapons in war.  Like so many people in 1945 and ever since, Stimson and Oppenheimer both recognized that the atomic bomb had changed everything.  More than any temporal regime, living or dead, it and its progeny remain a permanent enemy of mankind.

 

Notes

  1. For a discussion of the moral justification in regard to dropping the atomic bombs, see John Gray, Black Mass, New York: Farrar, Straus and Giroux, 2007, pp. 190-191.
  2. For an account of the fighting on Okinawa, see Eugene Sledge, With the Old Breed, New York: Random House, 1981.
  3. LeMay expresses this sentiment in an interview he gave for the 1973 documentary series, The World at War.
  4. See generally Chapter 12, “Hiroshima,” in Kai Bird, The Chairman: John J. McCloy and the Making of the American Establishment, New York: Simon and Schuster, 1992, pp. 240-268.
  5. Bird, p. 242.
  6. Bird, p. 244.
  7. Bird, p. 245.
  8. Bird, p. 245.
  9. Bird, p. 246.
  10. Bird, p. 250.
  11. Bird, pp. 247-248.
  12. Bird, pp. 249-250; Averell Harriman and Elie Abel, Special Envoy to Churchill and Stalin, 1941-1946, New York: Random House, 1975, p. 493; Bird, p. 251.  It should be noted that most of the top American military commanders opposed dropping the atomic bombs on Japan.  As Daniel Ellsberg observes: “The judgment that the bomb had not been necessary for victory—without invasion—was later expressed by Generals Eisenhower, MacArthur, and Arnold, as well as Admirals Leahy, King, Nimitz, and Halsey. (Eisenhower and Halsey also shared Leahy’s view that it was morally reprehensible.)  In other words, seven out of eight officers of five star rank in the U.S. Armed Forces in 1945 believed that the bomb was not necessary to avert invasion (that is, all but General Marshall, Chief of Staff of the Army, who alone believed that an invasion might have been necessary.)” [Emphasis added by Ellsberg.]  See Daniel Ellsberg, The Doomsday Machine, New York: Bloomsbury, 2017, pp. 262-263.  As it happened, Eisenhower was having dinner with Stimson when the Secretary of War received the cable saying that the Hiroshima bomb had been dropped and that it had been successful.  “Stimson asked the General his opinion and Eisenhower replied that he was against it on two counts.  First, the Japanese were ready to surrender and it wasn’t necessary to hit them with that awful thing.  Second, I hate to see our country be the first to use such a weapon.  Well… the old gentleman got furious.  I can see how he would.  After all, it had been his responsibility to push for all of the expenditures to develop the bomb, which of course he had the right to do, and was right to do.”  See John Newhouse, War and Peace in the Nuclear Age, New York: Alfred A. Knopf, 1989, p. 47.  Newhouse also points out that there were numerous political and budgetary considerations related to the opinions of the various players involved in developing and dropping the bombs.  One can only hope that budgetary responsibility/culpability did not (or does not) drive events.
  13. Harriman, p. 293.
  14. For his own published account of this period, see James F. Byrnes, Speaking Frankly, New York: Harper & Brothers, 1947.
  15. See Robert Dallek, The Lost Peace, New York: HarperCollins, 2010, p. 128. Dallek makes this point, basing it on the Strategic Bombing Survey, as well as the reports of Truman’s own special envoy to Japan after the war in October 1945.