
Edward O. Wilson at 90

Michael F. Duggan

The biologist Edward O. Wilson turned 90 on Monday, June 10th.

Arguably the most influential living scientist, he is a world authority on ants, the “father of sociobiology” and a leader of the biodiversity movement who coined the terms “biophilia” and “Eremozoic” (the latter to describe the geological period dominated by human beings, sometimes called the Anthropocene).

In the 1970s he gave much needed firepower to the “Nature” side of the Nature/Nurture discourse and infuriated a lot of social “scientists” (not long ago he wrote that “[h]istory makes no sense without prehistory, and prehistory makes no sense without biology”).

He has written 29 books–eleven of them since the age of 80–and won two Pulitzer Prizes (one for the classic 1978 On Human Nature). His newest book, Genesis: The Deep Origins of Society, just came out. His 2016 book “Half-Earth” tells us what we need to do to save the planet.

In his twin volumes, The Social Conquest of Earth and The Meaning of Human Existence–developing some ideas of Darwin from The Descent of Man–he posits the view that human morality is the result of tensions between the pressures of individual selection (and thus selfishness) and the pressures of group selection (altruism/empathy). He believes that the success of the human species (success to a fault) is largely due to our eusociality, a cooperative social structure (strategy?) shared in very different form with social insects like ants and termites, creatures that have also taken over the world.

One need not agree with him on all points he has made over a long and illustrious career to recognize his importance.

I met him in the early 2000s and he inscribed my first edition copy of On Human Nature. Seemed to be a first-rate guy. Happy birthday.

Six Books on the Environment

John Gray, Straw Dogs

Roy Scranton, Learning to Die in the Anthropocene and We’re Doomed. Now What?

Jedediah Purdy, After Nature

Edward O. Wilson, Half-Earth

Adam Frank, Light of the Stars

Reviewed by Michael F. Duggan

Modern urban-industrial man is given to the raping of anything and everything natural on which he can fasten his talons.  He rapes the sea; he rapes the soil; the natural resources of the earth.  He rapes the atmosphere.  He rapes the future of his own civilization. Instead of living off of nature’s surplus, which he ought to do, he lives off its substance. He would not need to do this were he less numerous, and were he content to live a more simple life.  But he is prepared neither to reduce his numbers nor to lead a simpler and more healthful life.  So he goes on destroying his own environment, like a vast horde of locusts.  And he must be expected, persisting blindly as he does in this depraved process, to put an end to his own existence within the next century.  The years 2000 to 2050 should witness, in fact, the end of the great Western civilization.  The Chinese, more prudent and less spoiled, no less given to over-population but prepared to be more ruthless in the control of its effects, may inherit the ruins.

                        -George Kennan, diary entry, March 21, 1977

No witchcraft, no enemy had silenced the rebirth of new life in this stricken world… The people had done it themselves.

                        -Rachel Carson

We all see what’s happening, we read it in the headlines every day, but seeing isn’t believing and believing isn’t accepting.

-Roy Scranton

Among the multitude of voices on the unfolding environmental crises, there are five that I have found to be particularly compelling.  These are John Gray, Jedediah Purdy, Roy Scranton, the biologist Edward O. Wilson, and, most recently, the physicist Adam Frank.  This post was originally intended to be a review of Scranton’s newest book, a collection of essays called We’re Doomed. Now What?, but I have decided instead to place that review in a broader context of writing on the environment.

I apologize ahead of time for the length and roughness—the almost complete absence of editing—of this review/essay (the endnotes remain unedited, unformatted, and incomplete, and a few remain in the body of the text).  This is a WORKING DRAFT. The introduction is more or less identical to an article of mine that ran in CounterPunch in December 2018.

Introduction: Climate Change and the Limits of Reason

Is it too late to avoid a global environmental catastrophe?  Does the increasingly worrisome feedback from the planet indicate that something like a chaotic tipping point is already upon us?  Facts and reason are slender reeds relative to entrenched opinions and the human capacity for self-delusion.  I suspect that neither this essay nor others on the topic are likely to change many minds.   

With atmospheric carbon dioxide at its highest levels in three to five million years, with no end to its increase in sight, the warming, rising, and acidification of the world’s oceans, the destruction of habitat and the cascading collapse of species and entire ecosystems, some thoughtful people now believe we are near, at, or past a point of no return.  The question may not be whether or not we can turn things around, but rather how much time is left before a negative feedback loop from the environment as it was becomes a positive feedback loop for catastrophe.  It seems that the answer is probably a few years to a decade or two on the outside, if we are not already there.  The mild eleven-thousand-year summer—the Holocene—that permitted and nurtured human civilization and allowed our numbers to grow will likely be done in by our species in the not-too-distant future.

Humankind is a runaway project.  With a world population of more than 7.686 billion, we are a Malthusian plague species.  This is not a condemnation or indictment, nor some kind of ironic boast.  It is an observable fact.  The evidence is now overwhelming that we stand at a crossroads of history and of natural history, of nature and our own nature.  The fact that unfolding catastrophic change is literally in the air is undeniable.  But before we can devise solutions of mitigation, we have to admit that there is a problem.                

In light of the overwhelming corroboration—objective, tested and retested readings of atmospheric CO2 levels, the acidification of the oceans, the global dying-off of the world’s reefs, and the faster-than-anticipated melting of the polar and Greenland icecaps and subsequent rises in mean ocean levels—those who still argue that human-caused global climate change is not real must be regarded frankly as either stupid, cynical, irrational, ideologically deluded, willfully ignorant or distracted, pathologically stubborn, terminally greedy, or otherwise unreasonably wedded to a bad position in the face of demonstrable facts.  There are no other possibilities by which to characterize these people and, in practical terms, the difference between these overlapping categories is either nonexistent or trivial.  If this claim seems rude and in violation of The Elements of Style, then so be it.1  The time for civility and distracting “controversies” and “debates” is over, and I apologize in no way for the tone of this statement.  It benefits nobody to indulge cynical and delusional deniers as the taffrail of the Titanic lifts above the horizon.

Some commentators have equated climate deniers with those who deny the Holocaust and chattel slavery.  Although moral equations are always a tricky business, it is likely that the permanent damage humans are doing to the planet will far exceed that of the Nazis and slavers.  The question is the degree to which those of us who do not deny climate change but who contribute to it are as culpable as these odious historical categories.  Perhaps we are just the enablers—collaborators—and equivalent of those who knew of the crimes and who stood by and averted their eyes or else knowingly immersed themselves in the immediate demands and priorities of the private life.  No one except for the children, thrown unwittingly into this unfolding catastrophe, is innocent.

The debate about whether human activity has changed the global environment is over in any rational sense.  Human-caused climate change is real.  To deny this is to reveal oneself as being intellectually on the same plane as those who believe that the Earth is the flat center of the universe, or who deny that modern evolutionary theory contains greater and more accurate explanatory content than the archetypal myths of revealed religion and the teleological red herring of “Intelligent Design Theory.”  The remaining questions will be over the myriad unknowable or partially or imperfectly knowable details of the unfolding chaos of the coming Eremocene (alternatively Anthropocene),2 and the extent of the changes and consequences, their severity, and whether or not they might still be reversed or mitigated, and how.  The initial question is simply whether or not it is already too late to turn things around.

We have already changed the planet’s atmospheric chemistry to a degree that is possibly irreparable.  In 2012 atmospheric CO2 levels at the North Pole exceeded 400 parts per million (up from the pre-industrial level of around 290 ppm).  At this writing carbon dioxide levels are around 415 ppm.  This is not an opinion, but a measurable fact.  Carbon dioxide levels can be easily tested even by people who do not believe that human activity is altering the world’s environment.  Even if the production of all human-generated carbon were stopped today, the existing surfeit will last for a hundred thousand years or more if it is not actively mitigated.3  Much of the damage therefore is already done—the conditions for catastrophic change are locked in place—and we are now just waiting for the effects to manifest as carbon levels continue to rise unabated, with minor plateaus and fluctuations.

Increases in atmospheric carbon levels have resulted in an acidification of the oceans.  This too is an observable and quantifiable fact.  The fact that CO2 absorption by seawater results in its acidification and the fact that atmospheric carbon dioxide traps heat more effectively and to a greater extent than oxygen are now tenets of elementary school-level science and are in no way controversial assertions.  If you do not acknowledge both of these facts, then you do not really have an opinion on global climate change or its causes.

As it is, the “climate debate”—polemics over the reality of global climate change—is not a scientific debate at all, but one of politics and political entertainment pitting testable/measurable observations against the dumb and uninformed denials of the true believers who evoke them or else the cynics who profit from carbon generation (the latter are reminiscent of the parable of the man who is paid a small fee to hang himself).4 Some general officers of the United States military are now on the record stating that climate change constitutes the greatest existing threat to our national security.5

Some deniers reply to the facts of climate change with anecdotal observations about the weather—locally colder or snowier than usual winters in a given region are a favorite distraction—with no heed given to the bigger picture (never mind the fact that the cold or snowy winters that North America has experienced since 2010 were caused by a dip in the jet stream, driven by much warmer than usual air masses in Eurasia that threw the polar vortex off of its axis and down into the lower 48 states while at times Greenland basked in 50-degree sunshine).

An effective retort to this kind of bold obtuseness is a simple and well-known analogy: the climate is like your personality and the weather is like your mood.  Just because you are sad for a day or two does not mean that you are a clinical depressive, any more than a locally cold winter set in the midst of the two hottest decades ever recorded worldwide represents a global cooling trend.  Some places are likely to cool off as the planet’s overall mean temperature rises (the British Isles may get colder as the Gulf Stream is pushed further south by arctic melt water).  Of course human-generated carbon is only one prong of the global environmental crisis, and a symptom of existing imbalance.

Human beings are also killing off our fellow species at a rate that will soon surpass the Cretaceous die-off, in what is the sixth great mass extinction of the Earth’s natural history.6 This is a fact that is horrifying insofar as it can be quantified at all—the numbers here are softer and more conjectural than the precise measurements of chemistry and temperature, and estimates may well be on the low side.  The true number of lost species will never be known as unidentified species are driven into extinction before they can be described and catalogued by science.7  But as a general statement, the shocking loss of biodiversity and habitat is uncontroversial in the communities that study such things seriously.  Human history has shown itself to be a brief and destructive branch of natural history in which we have become the locusts or something much, much worse than such seasonal visitations and imbalances.

As a friend of mine observed, those who persist in their fool’s paradise or obstinate cynicism for short-term gain and who still deny the reality of global climate change must ultimately answer two questions: 1). What evidence would you accept that humans are altering the global environment?  2). What if you are wrong in your denials?

From my own experience, I have found that neither fact-based reason nor the resulting cognitive dissonance it instills change many minds once they are firmly fixed; rationalization and denial are the twin pillars of human psychology and it is a common and unfortunate characteristic of our species to double-down on mistaken beliefs rather than admit error and address problems forthrightly.  This may be our epitaph.

And now the book reviews.

John Gray: The “Rapacious Primate” and the Era of Solitude

Straw Dogs, Thoughts on Humans and Other Animals, London: Granta, 2002 (paperback 2003), 246 pages.

Around 2007, a friend of mine recommended to me some books by the British philosopher and commentator John Gray.  On issues of human meaning/non-meaning vis-à-vis the amorality of nature, Gray, an urbane lecturer, comes off in this book as a two-fisted scrapper, a dark realist who loves to mix things up and disabuse people of moral fictions and illusions.  Straw Dogs is not specifically on the world environmental crises, but rather on human nature; ecological degradation obviously figures prominently into his thesis.

Straw Dogs is a rough-and-tumble polemic—Nietzsche-like in tone and format but Schopenhauer-like in its pessimism. It is a well-placed barrage against humanism in which the author, painting in broad strokes, characterizes his target as just another delusional faith, a secularized version of Christianity. Where Western religion promises eternal salvation, humanism, as characterized by Gray, asserts an equally unfounded faith in terrestrial transcendence: the myths of social progress, freedom of choice, and human exceptionality as a construct, an artificial distinction that “unnaturally” separates humans from the rest of the living world.  Even such austere commentators as Nietzsche (and presumably the existentialists that followed)—far from being nihilists—are in Gray’s assessment latter-day representatives of the Enlightenment, perhaps even Christianity in another guise, trying to keep the game of meaning and human uniqueness alive.                                                                                                                   

Gray begins this book with a flurry of unsettling assertions and observations.  In the preface to the paperback edition, he writes:

“Most people today think that they belong to a species that can be the master of its own destiny. This is faith, not science.  We do not speak of a time when whales and gorillas will be masters of their destinies. Why then humans?”

In other words, he believes that it is a human conceit to assume that we can take charge of our future any more than any other animal and that this assumption is based on an erroneous perception of human exceptionality by type from the rest of the natural world.  At the end of this section, he writes:

“Political action has become a surrogate for salvation; but no political project can deliver humanity from its natural condition.” 

Here then is a perspective so conservative, so deterministic and fatalistic about workable solutions to the bigger problems of human nature as to dismiss them outright rather than even entertain them as possibilities.  This is not to say that he is wrong.

But it is really in the first few chapters that Gray brings out the big guns in explaining that not only can we not control our fate, but that we have, through our very success as an animal, become a juggernaut, a plague species that is inexorably laying waste to much of the living world around us.  Interestingly he does not lay this at the feet “of global capitalism, industrialization, ‘Western civilization’ or any flaw in human institutions.” Rather, “It is a consequence of the evolutionary success of an exceptionally rapacious primate.  Throughout all of history and prehistory, human civilization has coincided with ecological destruction.”  Our trajectory is set by biological destiny rather than by economic, political, or social flaws or technological excess. We are damned by the undirected natural process that created and shaped our species and are now returning the favor upon nature by destroying the biosphere.

We destroy our environment then because of what we are (presumably industrial modernity is merely an accelerant or the apex manifestation of our identity as a destroyer).  We have by our very nature become the locusts, and destruction is part and parcel of who we are rather than a byproduct of a wrong turn somewhere back in our history.  Destruction and eventually self-destruction is in our blood, or more correctly, in the double helix spirals and the four-letter code of our DNA manifested in our extended phenotype.  The selfish gene and self-directed individual coupled with the altruism of group selection form a combination that will likely lead to self-destruction along with the destruction of the world as it was.

With the force of a gifted prosecutor presenting a strong case, and with all of the subtlety of the proverbial bull in a china shop, Gray observes that we are killing off other species on a scale that will soon rival the Cretaceous die-off that wiped out the dinosaurs along with so much else of the planet’s flora and fauna 65 million years ago.  He points to early phases of human overkill and notes that most of the megafauna of the last great ice age, animals like the woolly mammoth and rhinoceros, the cave bear, and saber-toothed cats, North American camels, horses, lions, mastodons (about 75% of all the large animals of North America), and almost every large South American animal—not-so-long-gone creatures that are sometimes anachronistically lumped together with the dinosaurs and trilobites as distantly pre-human—were likely first-wave casualties of modern human beings (there was a vestigial population of mammoths living on Wrangel Island until about 3,700 years ago, or about 800 years after the Pyramids of Giza were built).8  Quoting James Lovelock, Gray likens humans to a pathogen, a disease, a tumor, and indeed there is a literal resemblance between the light patterns of human settlement as seen from space and naturalistic patterns of metastasizing cancer.

Gray concedes “that a few traditional peoples loved or lived in balance with the Earth for long periods,” that “the Inuit and Bushman stumbled into a way of life in which their footprints were slight.  We cannot tread the Earth so lightly. Homo rapiens has become too numerous.”  He continues:

“A human population of approaching 8 billion can only be maintained by desolating the Earth.  If wild habitat is given over to cultivation and habitation, if rain forests can be turned into a green desert, if genetic engineering enables ever-higher yields to be extorted from the thinning soils—then humans will have created for themselves a new geological era, the Eremozoic, the Era of Solitude, in which little remains on the Earth but themselves and the prosthetic environment that keeps them alive.”

According to Gray then, wherever humans live on the scale of modern civilization (or any scale above the most benign of hunter-gatherers) there will be ecological degradation—there is no way to have recognizable civilization without inflicting harm on the environment.  Similarly, “green” politics and “sustainable” energy initiatives are also pleasant but misleading fictions—self-administered opiates and busy work to assuage progressives and Pollyannas beset with guilty consciences.  To Gray, environmentalism is the sum of delusions masquerading as real solutions and high-mindedness. Gray clearly believes what he is saying and is not just trying to provide a much-needed shaking up of things by making the truth clearer than it really is.  Regardless, his position seems to be a development of the adage that, given time and opportunity, people will screw up everything.

Gray’s dystopian future of a global human monoculture, his “green desert” or Eremozoic (“era of solitude”9)  finds parallel expression in the term Anthropocene, or the geological period characterized by the domination of human beings.  Adherents to this concept span a wide range from the very dark to the modestly optimistic to the insufferably arrogant to the insufferably idealistic.

Regardless of which term we use, Gray doesn’t think that things will ever get that far.  Sounding as if he himself were beginning to embrace a historical narrative of his own, he writes that past a certain point, nature (understood as the Earth’s biosphere) will start to push back.  The idea is that the world human population will collapse to sustainable levels, just like an out-of-control worldwide plague of mice, lemmings, or locusts.  Like all plagues, human civilization embodies an imbalance in an otherwise more or less stable equilibrium and is therefore by its nature fundamentally unsustainable and eventually doomed (almost 20 years ago, with a population of about six billion, the human biomass was estimated to be more than 100 times greater than that of any other land animal that ever lived10).

There is of course an amoral “big picture” implication to all of this—a view of the natural world that, like nature itself, is beyond good and evil—which recognizes that sometimes large changes in natural history resulting from both gradual change and catastrophic collapse have in turn resulted in an entirely new phase of life rather than a return to something approximating the previous state of balance.  This would include the rise of photosynthesizing/carbon-trapping/oxygen-producing plants, which took over the world, fundamentally changing the atmospheric chemistry from what had existed before and therefore the course of life that followed.11 More on this in the discussion below on Adam Frank’s Light of the Stars.

Gray’s thesis appears to have elements of a Malthusian perspective and the Gaia hypothesis of James Lovelock and Lynn Margulis.  It is unclear how Gray can be so certain of the inevitability of such dire outcomes—that humans lack any kind of moderation and control and that nature will necessarily push back (could humankind, embracing a greater degree of self-control, be an agent of the Gaia balancing mechanism?).  Such certainty seems to go beyond a simple extrapolation of numbers and the subsequent acknowledgment of likely outcomes, into an actual deterministic historical narrative—an eschatological assertion like the ones he takes to task in his excellent 2007 book Black Mass.  My sense is that Gray will likely be right.

As a theory then, I believe that the flaw in Gray’s thesis lies in its deterministic inevitability, its necessity, its fatalism, when we do not even know whether the universe (or the biosphere as a subset) is deterministic or indeterministic.  We may very well kill off much of the natural world and ourselves with it, but this may have less to do with evolutionary programming or biological determinism than with inaction or bad or ineffective decisions in regard to the unprecedented problems that face us.  I also realize that if we fail, this will be the ultimate moot point in all of human history.

The Gaia hypothesis (which is a real scientific theory) may turn out to be true. Perhaps nature will protect itself like a creature’s immune system by eradicating a majority of what William C. Bullitt called “a skin disease of the earth.”12  The problem is that this predictive aspect of the theory—really an organon or meta-theory—purports to describe a phenomenon that cannot be tested (although the extinction or near-extinction of humankind would certainly corroborate it). On the other hand, the regulation of atmospheric gases by the biosphere is real and testable.13

Let me clarify the previous paragraph: if the Gaia hypothesis maintains that the Earth’s biosphere is self-regulating (e.g. maintaining atmospheric oxygen levels at a steady state in resisting the tendency in a non-living system toward a chemical equilibrium), then this is a theory that can be accounted for by physics (e.g. James Lovelock’s “Daisyworld” thought experiment) and is not teleology or metaphysics (See: Adam Frank, Light of the Stars, 129, see also Lynn Margulis, Symbiotic Planet, 113-128).  If we hypothesize that there are elements of the biosphere that will act like a creature’s immune system in eradicating the surplus human population, then we have possibly ventured into the realm of metaphysics.

As a practical matter, any successful, intelligent, willful animal that can eradicate its enemies and competitors and alter its environment (both intentionally and unintentionally) will run afoul of nature. Edward O. Wilson has expressed this idea.  But is this a tenet of common sense?  Logical necessity?  Biological or physical determinism?  And as a small subset of nature, is it even possible for us to know what “necessity” is for nature?  Are we condemned to extinction due to a lack of ability to adapt to changes increasingly of our own making, arising from our own nature? And is our extinction made inevitable by a surfeit of adaptability and successful reproduction (i.e. the very qualities that allowed us to succeed)?  Does success at a certain level guarantee failure? Is balance possible in such a species?  What of balance and creatures whose numbers are held in sustainable check in a steady state for tens, and in some cases hundreds, of millions of years in relatively stable morphological form—the scorpion, shark, crocodile, and dragonfly—who live long enough to diversify slightly or change gradually along with conditions in the environment?  What of animals who have improved their odds (cats and dogs come to mind) through intelligence and a mutually beneficial partnership and co-evolution with humankind?

Gray says that we cannot control our fate, and yet our very success and perhaps our downfall is the result of being able to control so much of our environment (the elimination of natural enemies, from animal competitors to endemic diseases, and the regulation of human activity and production to guarantee water, food, energy, etc.). Any animal that can eliminate or neutralize the counterbalances to its own numbers will create imbalance, and unchecked imbalance leads to tipping points.14  It is ironic that Gray lays all of this at the feet of the human species as the inevitable product of our animal nature, as the result of biological and even moral inevitability, and yet I detect a tone of judgment about it all, as if we are somehow to blame for who we are, for characteristics that Gray believes are intrinsic and unalterable.

Gray, then, is a bleak post-humanist who apparently adheres to humanist values in his own life (indeed, as Camus knew, a view espousing a void of deontological values must lead either to humanism or nihilism, and nobody lives on a basis of nihilism).  In an interview with Deborah Orr that appeared in the Independent he states that “[w]e’re not facing our problems.  We’ve got Prozac politics”—an odd claim given the supposed inevitability of those problems and the impossibility of fixing them. It is an odd statement for a behavioral determinist.  Moreover, although he powerfully criticizes the proposed solutions of others, his own solutions are vague and unlikely to remedy the situation (not that that is their purpose).15  When he writes on topics outside of his areas of fluency (artificial consciousness, for instance), his ideas are not especially convincing.16

Of course in a literal biological sense Gray is right about a lot: humans are just another animal and to assert otherwise is to create an artificial distinction.  But even here, the demarcation between the organic and the artificial/synthetic (meaning the product of the human extended phenotype—a “natural category”) has to be further defined and is a useful distinction (“altered,” “manmade,” or “human-modified nature” may be more constructive, if inelegant, refinements of the “artificial” or “unnatural”).  After all, are domesticated animals “natural,” are feral animals “wild” in conventional usage, and does calling everything “natural” add clarity to finer delineations?

Gray frames his discussion as an either/or dichotomy of the utopian illusion of progress versus inevitable apocalyptic collapse.  But what if the truth of the matter is not this cut-and-dried?  Perhaps we cannot be masters of our fate in an ultimate sense, but can we manage existing problems and new ones as they arise, even from past solutions?  Although we have managed more modest instances in the past, here the devil lies in both the scale and the details, and the details may include a series of insurmountable hobbles and obstacles.

In Gray we may not be far off from Roy Scranton’s prescription of acknowledging defeat, and personal decisions about learning to die in a global hospice, but we are not there yet.  The chances of redeeming the situation may be one in 100 or one in 1,000, but there is still a chance.  As a glorified simian—a “super monkey” in the words of Oliver Wendell Holmes, Jr.17 (the flipside of Gray’s homo rapiens)—we are audacious creatures who must take that one chance, even if it turns out to be founded on delusions.  “If not gorillas and whales,” Gray asks, “why then humans?”  Because we are natural-born problem solvers; because gorillas and whales have never put one of their own on the Moon. Why humans?  Because the New Deal, the industrial mobilization during the Second World War, the Manhattan Project, the Marshall Plan, and the Apollo Moon Project are items of the historical record and not matters of faith.

Far from seeing human civilization in terms of enlightened progress, we must come to regard it as ongoing damage control: the snuffing of fires as they spring up and then the managing of spinoff problems as they emerge from previous solutions—mitigating rather than just adapting or surrendering.  It will involve an unending series of brutal choices and a complete reorientation of the human relationship with nature, whose only appeal will be that they are preferable to our own extinction and to inflicting irreparable damage on the world of which we are a part.

If Gray is simply making a non-deterministic Malthusian case that, unaltered, human population growth will likely result in a catastrophic collapse, we could accept this as a plausible and perhaps even a very likely hypothesis.  If on the other hand he is saying that the Earth is itself a living being and will necessarily push back against human metastasis through a sort of conscious awareness or physical law-like behavior, then the truth is yet to be seen.

What then is the practical distinction between the deterministic inevitability of Gray’s (Lovelock/Margulis’s) Gaia model and the practical inevitability of a Malthusian model (although Malthus himself hints at something very much like the Gaia thesis: he refers to famine as “the most dreadful resource of nature… The vices of mankind are active and able ministers of depopulation.  They are the precursors in the great army of destruction, and often finish the dreadful work themselves.  But should they fail in this war of extermination, sickly seasons, epidemic, pestilence, and plague, advance in terrible array, and sweep off their thousands and tens of thousands.  Should success still be incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow, levels the population with the food of the world” (Malthus, p. 61))?  The answer is that the latter is inevitable only if conditions leading toward a collapse remain unaltered, and therefore allows for the possibility of a workable solution where the inevitable model does not.  As that greatest of Malthusian-antagonists-turned-Victorian-progressive-protagonists from English literature, Ebenezer Scrooge, in all of his Dickensian wordiness duns the Ghost of Christmas Present:

“Spirit, answer me one question: are these the shadows of things that will be or the shadows of things that may be only?  Men’s actions determine certain ends if they persist in them.  But if their actions change, the ends change too.  Say it is so with what you show me… Why show me this if I am past all hope?”18 

In the words of another English writer also given to overwriting, “aye, there’s the rub.”  Perhaps it is not too late for humankind to change its ways, and to regard writers on the environment as latter-day analogs of the ghosts of Christmas Present and Future.  It should be noted that under Malthus, there are survivors once the excess is eliminated.19

If Gray is right, some have argued that we might as well keep on polluting and degrading the environment, given that destruction flows from unalterable human nature and therefore self-extermination is inevitable.  Tiny Tim will go to an early grave no matter what changes and accommodations Scrooge makes in a closed universe.20  As Gray himself writes, “[p]olitical action has come to be a surrogate for salvation; but no political project can deliver humanity from its natural condition.”  Bah Humbug.

Of course whether the impending collapse of world civilization is deterministically certain or only merely certain in a practical or probabilistic sense is ultimately irrelevant, given that either way it will likely come to pass.  The question here is whether we will catastrophically implode as just another plague species, or whether we are able to manage a controlled decline in population to a sustainable steady state (and do the same with carbon even earlier).  It is the difference between an uncontrolled world of our own making and one in which we shape events piecemeal through suitable incremental goals toward a steady state.  It is the difference between a slight chance and no chance at all.

Although I am not sold on the idea that biology is destiny—even though we can never untether ourselves from nature or our own nature, we can perhaps rise above our brute character with moderation and reason—I do agree that past a certain point, if we kill off the natural world, we will have killed ourselves in the process.  There will never be a human “post-natural” world.

One could argue that the audacity, hubris, and capacity for innovation that allowed us to take over the world are value-neutral qualities that could be reoriented toward curbing our own success.  One wonders what value Gray credits to human consciousness and to human ideas other than an admission that science and technology (notably medical and dental) progress.  One senses that he sees our species as not worthy rather than as tragic.

Darwinian success may lead to Malthusian catastrophe just as a human apocalypse could mean salvation for the rest of the living world. The over-success of the human species is the result of natural drives to survive, to improve our situation, and to eliminate the competition (as well as an excellent blueprint—our genes—and our nature, which is divided between the individual and the group.  See E.O. Wilson, The Meaning of Human Existence).  More specifically, if these powerful tools served us so well in making us the biological success we have become—and if survival is the conscious or unconscious goal of animals—then it is an artificial distinction to claim that we could not curtail this success with the same tools.

In the interest of full disclosure, I must say that I don’t share Gray’s apparent contempt for humanism or the Enlightenment.  His own ideas stand on the shoulders of, or in proximity to, these ideas and trends and would not otherwise exist without them.  As a friend of mine observed, if we think of the natural world as a living organism (as Gray might), then, by way of analogy, human beings might be regarded as the most advanced, most conscious neurons of the brain of the creature.  The fact that we have become a runaway project does not make us bad (even if we accept Gray’s premise that humans destroy nature because of who we are, we can hardly be blamed for being who we are).  The fact that brain cells sometimes mutate into brain cancer hardly makes brain cells bad.21

One problem with writing about nature is that the living world is like a great Rorschach test onto which we project our beliefs and philosophy à la mode, reading them into our observations and the lessons we draw from it.  Emerson and Thoreau are mystics of a new-agey pantheism “as it exists in 1842.”  Malthus is a conservative economist and moralist wedged between the Enlightenment he helped to kill and the naturalism and modernity he helped usher in.  Darwin is a reluctant naturalist keenly aware of the importance of his great idea but shy of controversy and invective.  In Pilgrim at Tinker Creek, Annie Dillard is a perceptive and precociously odd woman-child who likes bugs and is endowed with a poet’s genius for the written word in reporting what she sees with such brute honesty that she overwhelms herself.22  Gray fluctuates from neo-Hobbesian realist to Gaia fatalist to Schopenhauer-like pessimist.

To be fair, Straw Dogs is probably not Gray’s best book (see Black Mass, for instance).  In the end, there is something a little facile, a little shallow about the swagger, the pose he strikes here—the professional doom-and-gloomer on a soap box to frighten the fancy folk out of their smug orthodoxy.  Although there are few things more dangerous than a true believer, one comes away from Gray wondering if he believes all of his own ideas.  This is not to say that there are not powerful ideas here or that they are wrong. My gut feeling is that the book may one day be regarded as prophecy.   

Roy Scranton and Nietzsche’s Hospice

Learning to Die in the Anthropocene, City Lights Books, 2015, 142 pages.                                          

Another of the more eloquent voices on the dark side of the Anthropocene perspective is Roy Scranton.  A soldier and scholar who has glimpsed the ruined future of humankind in the rubble and misery of Iraq, Scranton believes that it is simply too late to save the environment.  The time for redemption has passed. Full stop. 

His response, therefore, is one of acceptance and adaptation: as members of a myth-making species, people should acknowledge that the world that we knew is finished and let it die with courage and dignity in the unfolding Anthropocene.  In this prescription he combines Nietzsche’s premise of living on one’s own terms with the Jungian preoccupation with myths.  In some respects, he is the opposite of Gray in that he embraces humanism and mythmaking and places much of the blame at the feet of capitalism rather than our animal nature.  (Scranton 2015, 23-24, Gray 2013, 112-118)

I found that his two most revealing pieces on this topic are his hard-hitting article “We’re Doomed.  Now What?” and his book Learning to Die in the Anthropocene, both from 2015.

In some respects Scranton goes beyond Gray by asserting that things are already too far gone as a matter of fact, and that all that remains is to learn to let civilization die.  Scranton is a noble, disillusioned bon vivant of the mind forced by circumstances and his own clear and unflinching perception into fatalistic stoicism. 

In Learning to Die in the Anthropocene, a grimly elegant little book in which he builds his case, Scranton acknowledges the existence of the neoliberal Anthropocene, recognizing its necessarily terminal nature.  But he is speaking about the death of the human world as we know it with a general idea about how to adapt, learn, survive, and pass on wisdom in the world after.

Scranton is not as elemental as Gray and his claim is not necessarily deterministic in character (i.e. that the looming end is the result of cosmic or genetic destiny or the natural balancing of the biosphere).  He simply observes that things are too far gone to be reversed.  Where Gray places blame squarely on the animal nature of homo rapiens—“an exceptionally rapacious primate”—and not on capitalism or Western civilization, Scranton puts much of the blame, both practical and moral, at the feet of carbon-fueled capitalism, “a zombie system, voracious and sterile,” an “aggressive human monoculture [that has] proven astoundingly virulent but also toxic, cannibalistic and self-destructive.” (Gray 2002 (2003), 7, 151, 184; Scranton 2015, 23).  As with Edward O. Wilson before him, he calls for a “New Enlightenment.” (Scranton, 2015, 89-109; Wilson 2012, 287-297).

For all of his insight, Scranton does not advance grandiose theories about human nature (most of his condemnation is of economics/consumerism and the realities of power, although he does believe that “[t]he long record of human brutality seems to offer conclusive evidence that both individually and socially organized violence [are] as biologically a part of human life as are sex, language, and eating” note).  He just looks at the world around him—peers Nietzsche-like into the unfolding abyss—and does not blink.  Honest, sensitive, and intelligent, he simply tells the truth as he sees it.  He accepts the inevitable without illusion or delusion.  The time for redemption has passed, and we must learn to let our world die with whatever gives us meaning.

As with Gray, Scranton may prove to be right as a practical matter and believes the end to be a matter of empirical fact rather than the unfolding of biological, historical, or metaphysical necessity.  He speaks about learning to die, but his book is only palliative in tone as regards capitalistic civilization.  He states that:

“The argument of this book is that we have failed to prevent unmanageable global warming and that the global capitalist civilization as we know it is already over, but that humanity can survive and adapt to the new world of the Anthropocene if we accept human limits and transience as fundamental truths, and work to nurture the variety and richness of our collective cultural heritage.  Learning to die as individuals means letting go of our predispositions and fear.  Learning to die as a civilization means letting go of this particular way of life and its ideas of identity, freedom, success, and progress.  These two ways of learning to die come together in the role of the humanist thinker: the one who is willing to stop and ask troublesome questions, the one who is willing to interrupt, the one who resonates on other channels and with slower, deeper rhythms.” (Scranton 2015, 24)

He is speaking of the death of the world as we knew it and the individual lives we knew.  But he is also speaking of adapting and emerging in a time after with a universal humanism shorn of the assumptions of a failed world.  In this sense, he is telling us what to pack for after the storm, both for its own sake, and perhaps to learn from it and do better next time.  He writes:

“If being human is to mean anything at all in the Anthropocene, if we are going to refuse to let ourselves sink into the futility of life without memory, then we must not lose our few thousand years of hard-won knowledge accumulated at great cost and against great odds. We must not abandon the memory of the dead.” (Scranton 2015, 109)

In this sense Scranton is like a fifth century Irish monk carefully preserving civilization at the edge of the world, on the precipice of what might be the end of civilization, as well as an Old Testament prophet speaking of an eventual dawn after the dark of night, the calm or chaotic altered world after the tempest.  As with the early Irish monks and similar clerical scribes writing at the height of the Black Death of the 14th century, we do not know whether or not we face the end of the world. (Tuchman 1978, 92-125). 

Although I do not agree with the Anthropocene perspective of surrender and adaptation as long as there is a chance to avoid or mitigate a global disaster, there is much to like about Scranton’s perspective here.

In “We’re Doomed. Now What?” he goes even farther than the idea of the heroic humanist thinker and becomes something like Emerson’s all-perceiving eyeball, or a kind of pure empathetic consciousness.  Relying heavily on the perspectivism of Nietzsche, Scranton says that human meaning is a construct.  But meaning must be tied to—exist in—proximity to perceived reality, and beyond meaning is truth (Tarski 1956 (1983), 155).  Perspectivism is a kind of relativistic but intersubjective triangulation for a more complete (objective? Popper, OK) picture (Peirce).  From the accessing of truth, we may devise a more informed and less delusional kind of meaning.

He writes that rather than die with our provincial illusions intact,

“We need to learn to see with not just our Western eyes but with Islamic eyes and Inuit eyes, not just human eyes but with golden-cheeked warbler eyes, Coho salmon eyes, and polar bear eyes and not even with just eyes but with the wild, barely articulate being of clouds and seas and rocks and trees and stars.”

In other words, this is a kind of reverse-phenomenology: rather than begin without assumptions, we should begin with all perspectives.  As sympathetic as I am with all the living things he mentions, beyond a general empathy, to see things through their eyes is an impossibility.  I too feel a kind of pan-empathy, only without the illusion (a Western illusion) that I can truly see things as they do.  And besides, Scranton does not mention what good it would do even if it were possible.  His idea here is reminiscent of Edward O. Wilson’s notion of biophilia. (Wilson 1984).

It seems odd that Scranton believes that technology cannot save us from the climate crisis, and yet empathy and philosophy will save us in a time after.  It may work for individuals—and certainly for thinking people, like historians—but it is not a realistic prescription for an overpopulated world in crisis.  Perhaps he would benefit from some of Gray’s realism about human nature.

Of course even without hope there are also good reasons to act with dignity in the face of inevitable demise.  This of course is a key tenet of the Hemingway world view: that in a world without intrinsic meaning, we can still come away with something if we face our fate with courage and dignity.  Nietzsche’s prescription is even better: if we are to live our lives in an eternal sequence of cycles, then we should attempt to conduct our lives in such a way as to make them monuments to ourselves, to eternity, for eternity.  We do this by living in such a way as would best reflect our noble nature.  Although modern physics has obviously cast doubt on the idea of eternal recurrence, the idea also holds up equally well in the block universe of Einstein (and Parmenides and Augustine) in which the past and future exist forever as a continuum in spite of the “stubbornly persistent illusion” of the present moment.  Our lives are our eternal monuments between fixed brackets, even in a dying world, and although Nietzsche and Einstein were both determinists, we must (paradoxically) act as if we have choice.

Camus believes that in a world without deontological values, we assert our own and then try to live up to them knowing that we will fail.  A.J. Ayer inverts this with the idea that life provides its own meaning in a similar sense that our tastes choose us more than we choose them.  If Ayer is right, then perhaps we arrive back at determinism: we have no choice but to immerse ourselves in personal myths as they select us.  We have a will, but it is a part of who we are, and who we are is given.

Of course one could ask how we are to affirm what makes us distinctively human in a positive sense when what distinguishes us as a plague species continues to strangle the biosphere.  What is meaning—aesthetic, intellectual or otherwise—in a dying world?  Do we withdraw into our myths, our archetypes as natural-born myth-makers, or has this been a part of the problem all along?

To this I would only add what might be called “The Parable of the Dying Beetle.”  When I was a child, I came across a beetle on the sidewalk that had been partially crushed when someone stepped on it.  It was still alive but dying.  I found a berry on a nearby bush and put it in front of the beetle’s mandibles and it began to eat the fruit.  There may have been no decision—eating something sweet and at hand was presumably something the beetle did as a matter of course.  It made no difference that there was no point in a dying beetle nourishing itself any more than did my offering it the berry to begin with.  It was simply something that the beetle did.  Perhaps it is the same with humans and myth-making: it is what we do, living or dying. After all, real writers write not to get published. They write because they are writers.

The Inner Worlds and Outer Abyss of Roy Scranton

We’re Doomed. Now What?, New York: Soho Press, Inc., 2018.

Scranton’s long-awaited new book is a collection of essays, articles, reviews, and editorials.  It begins with a beefed-up version of his New York Times editorial “We’re Doomed. Now What?” [https://opinionator.blogs.nytimes.com/2015/12/21/were-doomed-now-what/]—which distills some of the themes of his earlier book, Learning to Die in the Anthropocene.  The new book is organized into four sections.  The first is on the unfolding climate catastrophe.  The second is on his experiences of the war, followed by “Violence and Communion” and “Last Thoughts.”  Given the fact that Scranton’s most conspicuous importance is as a writer—as a clear-sighted prophet of the environment—this arrangement makes sense, even though his vision of the future comes from his experience as a combat infantryman and his own sensitive and perceptive nature.

When Scranton limits himself to his own observations and experiences, he is powerful, poetic—the Jeremiah of his generation and possibly the last Cassandra of the Holocene, the world as it was.  He is a writer of true genius and a master storyteller of startling eloquence who writes multilayered prose with finesse and grace.  If there is any flaw, it may be a slight tendency toward overwriting, but this is an insignificant aesthetic consideration.  He also tends to assert more than reveal, but then he is not a novelist.

When he listens to his own muse or discusses other first-person commentators on war, he is magnificent.  When he references great philosophers, he is good—earnest but didactic, his interpretations more conventional.  When he references recent philosophers, especially postmodernists like Derrida, Foucault, and Heidegger, he is only slightly more tolerable than anybody else dropping these names and their shocking ideas (one can only hope that he has read some of Chomsky’s works on scientific language theory, but I digress).  I also take issue with some of his interpretations of Nietzsche, but these are the quibbles of a philosophy minor and the book is mostly outstanding and should be read.                                           

His writing on war is insightful both taken on its own and chronologically as a preface to his writing on the environment.  He is not only a keen observer who knows of what he speaks, he is completely fluent in the corpus of experience-based war literature.  If Scranton turns out to be wrong about the terminal nature of the environmental crises, his writing on war will likely endure as an important contribution to the canon in its own right.  In my library, his book will alternate between shelf space dedicated to the environment and somewhere in a neighborhood that includes Robert Graves, Wilfred Owen, Siegfried Sassoon, Ernst Jünger, Vera Brittain, James Jones, Eugene Sledge, and Paul Fussell.  The essays on war are reason enough to buy the book.  Certainly every Neocon, every Vulcan or humanitarian interventionist whose first solution to geopolitical problems in important regions of the developing world is to drop bombs or send other people’s children into harm’s way should read all of Scranton’s war essays.

There is perhaps one substantial point of contention I have with this book, and I am still not sure how to resolve it, whether to reject my own criticism or to embrace it.  Scranton begins this collection with his powerful “We’re Doomed. Now What?” but ends it with an essay, “Raising a Daughter in a Ruined World,” that appeared in the New York Times around the same time that the new book was released during the summer of 2018.  Regardless of whether or not one agrees with its thesis, there is an uncompromising purity of vision in the earlier book and most of the essays of the new one.  

In the last essay, Scranton writes with his characteristic power, insight, and impossibly good prose.  But then he seems to pull a punch at the end.  Sure, we’re screwed and there is little reason for hope, but here the nature of the doomsday scenario is a little less clear, less definitive: does the near future hold the extinction of our species along with so many others, or just some kind of transformation?  Is the world merely ruined or about to be destroyed?  To be fair, nobody knows how bad things will be beyond the tipping point (and there are parts of his earlier book which also suggest transformation).  If he begins the new book with a knockout hook, he seems to end it with a feint that, while not exactly optimism, is something less than certain death—a vague investment in hope with real consequences.

I get it: kids force compromises and force hope along with worry, and his intellectual compromise (tap dance?) may be that there is a glimmer of hope.  Even though the abyss looks into you when you look into it, most of us would blink at least once, even in a world that may (or may not) be dying.

He rightly asks “[w]hy would anyone choose to bring new life into this world?” and then spends part of the essay rationalizing an answer that is very much in keeping with the theme of the myths of personal meaning he prescribes in Learning to Die in the Anthropocene.  Kids force hope, but who forced, or at least permitted, the child’s existence to begin with?  It is none of my business, except that Scranton is a public commentator who brought up the point publicly and then attempts to explain it.  The problem is that the new creature did not ask to be a part of someone’s palliative prescription.  For while there are many shades of realism, one cannot be half a fatalist any more than one can be half a utopian.  Or as a friend of mine observed, “[T]he problem with taking responsibility for bringing a child into the world is that it precludes rational pessimism.”

The more general problem is that this acknowledgment of possible hope forces him from a less compromising position in his earlier book and most of his articles in the new one to conclude with a somewhat more conventional and less interesting Anthropocene position—one that admits that the world is ruined (i.e. too far gone to be saved through robust mitigation), and so rather than try to reverse the damage we must adapt.  In reviewing his previous book, I noted that a fatalistic point of view risks premature surrender, but here my criticism is more with his newfound rationale for solutions than with his all-too-human flinch per se.  

Learning to Die in the Anthropocene gives us a basis for a personal approach to the world's end; in "Raising a Child in a Doomed World" (https://www.nytimes.com/2018/07/16/opinion/climate-change-parenting.html), Scranton states that individual actions—other than suicide on a mass scale (although one can only wonder what kind of greenhouse gases billions of decomposing corpses would produce)—cannot be a part of the solution in terms of fixing the problem.  Even with the possibility of premature surrender, the earlier, more personalized perspective is more interesting than the new one with its non-forthcoming large-scale prescriptions.  He throws out a few of the solutions common to the young (global bottom-up egalitarianism, global socialism), but has no illusions about the feasibility of these.

Even here there is honesty; he does not pretend to know how to fix things.  And so (during an August 8, 2018 reading and book signing at Politics & Prose in Washington, D.C.), he lapses into generalities when questioned: “organize locally and aggressively,” perhaps there will be a world socialist revolution (which he openly concedes is utopian, the realm of “fantasy,” yet at another point states that it “now seems possible”), do less and slow down (although in the last essay, he states that personal approaches can’t work), and learn to die (getting back to his previous theme).  

A couple of other minor points: the book's title seems a bit too stark and spot-on for such a serious collection and is more in keeping with the placard of the archetypal street corner prophet of New Yorker cartoons.  Similarly, the cover illustration—the Midtown Manhattan skyline awash behind an angry sea—struck me as being a little tabloidesque, but what is it they say about judging a book by its cover?

Jedediah Purdy and the Democratic Anthropocene

After Nature: A Politics for the Anthropocene, Harvard University Press, 2015, 326 pages.

Another of the more articulate voices under the umbrella of Anthropocene perspectives is Jedediah Purdy, now a professor of law at Columbia University Law School after 15 years at Duke.  Purdy is a prolific writer and this book—now four years old—is by no means his most recent statement on the environment (for an example of his more recent writing, see https://www.nytimes.com/2019/02/14/opinion/green-new-deal-ocasio-cortez-.html).

After Nature is a wonder and a curiosity.  In the first six chapters he provides an intellectual history of nature and the American mind that is nothing short of brilliant.  His writing and effortless erudition are exceptional.  He is a truly impressive scholar.  This part of his book is intellectual history at its best. 

Purdy’s approach is to use the law as a reflection of attitudes toward the natural world.  Through a legal-political lens, he devises the successive historical-intellectual categories of the providential, romantic, utilitarian, and ecological, interpreting nature as the wilderness/the garden, pantheistic god, natural resources, and a living life support system to be tamed, admired, worshiped, managed, and preserved. 

These interpretive frames in turn characterize or “define an era of political action and lawmaking that left its mark on the vast landscapes.”  On page 27, he states that these visions are both successive and cumulative, that “[t]hey all coexist in those landscapes, in political constituencies, and laws, and in the factious identities of environmental politics and everyday life.”  He acknowledges that all of these perspectives exist in his own sensibilities.  In my experience, one is unlikely to come across better fluency, depth of understanding, and quality of writing on this topic anywhere, and one is tempted to call it a masterwork of its kind.

It is therefore all the more surprising that after such penetrating analysis, historical insight, and eloquence in describing trends of the past, his prescription for addressing the environmental problems of the present and future would go so hard off the rails into a tangle of unclear writing and a morass of generalities and unrealistic remedies.  It also strikes one as odd that such a powerful and liberal-minded commentator would embrace his particular spin on the Anthropocene perspective, given some of its implications.

In Chapter 7, "Environmental Law in the Anthropocene," Purdy introduces some interesting, if not completely original, ideas like "uncanniness"—the interface with other sentient animals without ever knowing the mystery of what lies behind it, of what they feel and think.  Before this, he discusses something he calls the "environmental imagination"—an amalgam of power ("material") interests and values.  After this he ventures into more problematic territory in his sub-chapter "Climate Change: From Failure to New Standards of Success."

Purdy rejects the claims of unnamed others that climate change can be "solved" or "prevented" (these are his cautionary quotation marks, although it is unclear whom he is quoting).   He writes about the "implicit ideas" of unidentified "scholars and commentators" (my quotation marks around his ideas) and their "predictable response" of geo-engineering to rapidly mounting atmospheric carbon levels ("a catch-all term for technologies that do not reduce emissions but instead directly adjust global warming").  Again, I am not sure to whom he is referring here. Most people I know who follow environmental issues favor a variety of approaches, to include the reduction of carbon production.

According to Purdy, this perspective begins with "pessimism" and the observation that "we are rationally incapable of collective self-restraint."  This is reasonable enough, and Purdy recognizes that spontaneous self-restraint on a global scale has not been forthcoming.  Indeed it is hard to imagine how such collective action would manifest itself on such a massive scale short of a conspicuous crisis of a magnitude that would likely signal the catastrophic end of things as we know them (e.g. if we woke up one day and most of the coastal cities of the world were under a foot of water).  If this kind of awareness of a crisis were possible at a point where it was not too late to mitigate the crises, it could only be harnessed through the top-down efforts of states acting in concert.

With self-restraint not materializing, the "pessimism" of the environmental straw man switches to "hubris."  And both of these descriptive nouns then "take comfort" (just like actual people or groups of people in a debate) in an either/or conclusion that if we fail to "prevent" climate change or "save" the planet from it, then all bets are off; we have failed, the game is up.  This threat of failure and apocalypse then results in the "next step" of the "try anything now!" attitude of geo-engineering.

From here he concludes that “[b]oth attitudes manage to avoid the thought [idea] that collective self-restraint should be a part of our response, perhaps including refraining from geo-engineering: the pessimism avoids that thought by demonstrating, or assuming, that self-restraint would be irrational and therefore must be impossible; and the hubris avoids it by announcing that self-restraint has failed (as it had to fail ‘rationally’ speaking), it was unnecessary all along anyway.”

Purdy then "propose[s] a different way of looking at it" and calmly announces that "climate change, so far, has outrun the human capacity for self-restraint" [so, the attitude of "hubris" is right then?], that it is too late to save nature as it was ("climate change has begun to overwhelm the very idea that there is a 'nature' to be preserved"), and that we should learn to adapt.  In the next paragraph, he states "[w]e need new standards for shaping, managing, and living well.  Familiar standards of environmental failure will not reliably serve anymore [does he mean metrics of temperature, atmospheric and ocean chemistry, and loss of habitat/biodiversity?].  We should ask, of efforts to address climate change, not just whether they are likely to 'succeed' at solving the problem, but whether they are promising experiments—workable approaches to valuing a world that we have everywhere changed."

For a moment then, there is a glimmer that Purdy might be on to something by embracing a Popper-like outlook of experimentation and piecemeal problem solving/engineering.  The question is how to implement an approach of bold experimentation. 

My own view is that, on balance, the environmentalists of recent decades have been clear-sighted in their observations and that their "pessimism" is warranted.  As with Malthus and the inexorable tables of population growth, I would contend that they are right except perhaps for their timetable.  Is the dying-off of the world's reefs and the collapse of amphibian and now insect populations all just the pessimism and hubris of fatalistic imaginings?

How then should we proceed?  Even with the implosion of the End of History narrative, Purdy, like so many of his generation and the younger Millennials, seems to have a child's faith in the curative powers of democracy.  His concurrence with Nobel laureate Amartya Sen's famous observation that famine has never visited a democracy appears to be as much of an uncritical Fukuyama-esque cliché as the assertion that democracies do not fight each other (malnutrition on an impressive scale has in fact occurred in Bangladesh and in the Indian states of Orissa and Rajasthan—i.e. regions within a democratic system).

Purdy then asserts a kind of democratic or good globalization in contrast to the predatory, neoliberal variety that he rightly identifies as a leading accelerant of the global environmental catastrophe.  He writes that "[p]olitics will determine the shape of the Anthropocene."  Perhaps, but what does "democracy" mean to the millions living on trash heaps in the poorer nations of the world?  What does it mean in places like Burma, the Congo, and Libya?

A savant of intellectual history, Purdy seems to know everything about the law and political history as a reflection of American sensibilities.  But politics and the law (like economics and the military) are avenues and manifestations of power—even when generous and high-minded, the law is about power—and one is left wondering if Purdy knows how power really works.  

In the tradition of Karl Popper's The Open Society and its Enemies, I would contend that the primary benefits of democracy (meaning the representative democracy of liberal republics) are practical, almost consequentialist in nature, rather than moral. First, it is an effective means of removing bad or ineffective leaders and a means of promoting "reform without violence;"27 second, it should ideally provide a choice in which a voter can discern a clearly preferable option given their interests, outlook, and understanding.

The idea of a benevolent democratic genre of globalization and a "democratic Anthropocene" is reminiscent of academic Marxians of a few decades ago who waited for the "real" or "true" Marxism to kick in somewhere in the world while either shrugging off its real-world manifestations in the Soviet Union and the Eastern Bloc, China, Cuba, and North Korea as false examples and corrupt excrescences, or else acknowledging them as hijacked monstrosities.

Whether in support of Marxism or democracy, this kind of ideological stance allows those who wield such arguments to immunize or insulate their position from criticism rather than constructively welcoming it, inviting it.  It could be argued that the concept of egalitarian democratic or socialistic globalization is to the current generation what Marxist socialism was to American idealists of a century ago.  In the early twentieth century, a majority of Americans had the realism and good sense not to accept the eschatological vision and prescriptions of the earlier trend.  As numerous writers have noted, populism is just as likely to take on a reactive character as it is a high-minded progressive one.  As economist Robert Kuttner and others have observed, some of the European nations whose elections were won by populist candidates can be described as "illiberal democracies." [See Robert Kuttner, Can Democracy Survive Global Capitalism?, 267].

The fact that some of the most brilliant young commentators on the environment, like Purdy and perhaps Scranton (even with his admission that global socialism is possibly utopian)—to say nothing of veteran commentators on the political scene, like Chris Hedges (America: The Farewell Tour)—embrace such shockingly unrealistic approaches leaves one with a sense of despair over the proposed solutions as great as that over the crises themselves.  It is like pulling a ripcord after jumping out of an aircraft only to find that one's parachute has been replaced with laundry.

To be fair, nobody has a solution.  Edward O. Wilson has lamented that humans have not evolved to the point where we can see the people of the world as a single community.  Even such a world-historical intellect as Albert Einstein advocated a single world government. [See Albert Einstein, "Atomic War or Peace," in Out of My Later Years, 185-199].  If the proliferation of nuclear weapons and the possibility of the violent destruction of the world could not force global unity as a reality, what chance do the environmental crises have?  By the end of 1945, everybody believed that the atomic bomb existed, while today powerful interests continue to deny the reality of the climate crises. As George Kennan observes, the world will never be ruled by a single regime (even the possibility that it will be ruled entirely under one kind of system seems highly unlikely).  Unfortunately, he will probably be proven right.

Purdy rightfully despises the neoliberal Anthropocene wrought by economic globalization.  But perhaps this is the true nature of globalization: aggressive, expansionistic, greed-driven, blind to or uncaring of its own excesses, and de facto imperialistic in character.  William T. Sherman famously observes that “[w]ar is cruelty, and you cannot refine it.”28  So it is with globalization, whether it be mercantilist, imperialist, neoliberal, or some untested new variety.

Globalization is economic imperialism and it likely cannot be reformed.  The whole point of off-shoring industry and labor arbitrage is to make as big of a profit as possible by spending as little money as possible in countries with no tax burdens and few, if any, labor and environmental laws, and people willing to work for almost nothing.  Globalization is the exploitation of new markets to minimize costs and maximize profits. While the purpose of an economy under a social democratic model is to provide as much employment as possible, neoliberal globalization seeks a system of efficiency that streamlines the upward flow of wealth from the wage slaves to the one percent.  

It is conceivable that someday in the distant future the world will fall into an interlinked global order based on naturalistic economic production regions and import-shifting cities, as described by Jane Jacobs.  But that day, if it ever comes, is both far off and increasingly unlikely, and there exists no roadmap of how to get there.29  Certainly a sustainable, steady-state world might have to be more egalitarian than it is today as a part of fundamentally re-conceptualizing the human relationship with nature. But this too is a long way down the road and would have to be imposed by changing circumstances forced by the environment.  We need solutions now, and the clock is ticking.

For the short term—for the initial steps in a long journey—the best we can hope for is modest and tenuous cooperation among sovereign states to address the big issues facing us: a shotgun marriage forced by circumstances, by intolerable alternatives (an historical analogy might be the U.S.-Soviet alliance in the Second World War, and the effort will have to be like a World War II mobilization, only on a vastly larger scale).  We will need states to enforce change locally, and international agreements will have to establish what the laws will be.  The problem here is the internal social and political divisions within states that are unlikely to be resolved.  Moreover, immediate local interests will always take priority over what will likely be seen as abstract worldwide issues.  In order to prevent such internal dissent and tribalism, and building on Jacobs' idea, an ideal world order would have to consist of small regional states that are demographically homogenous (another idea of David Isenbergh).

Purdy rightfully disdains the disparities of neoliberal globalization but only offers an ill-defined program in which "the fates of rich and poor, rulers and ruled" would be tied (presumably the ruling classes would allow the ruled to vote away their power).  The idea here is that famine is not the result of scarcity but rather of distribution.

If such control and reconfiguration is already possible, then why have even more modest remedies failed to date?  Why not put in place the sensible prescriptions of the environmentalists who embody the “pessimism” and “hubris”? Why stop there?  Why not banish war and bring forth a workers’ paradise? Why not Edward O. Wilson’s Half-Planet goal (see below)?

As regards the practicalities of democratic globalization, Purdy's prescriptions also seem to ignore some inconvenient historical facts.  For instance, as many commentators have observed, the larger and more diverse a population becomes, the less governable it becomes, and certainly the less democratic, as individual identities and rights are subordinated to the group.  The idea of a progressive social democracy with a very large and diverse population seems unlikely to the point of being a nonstarter.30

Democracy works best on a local level where people are intimately acquainted with the issues and how they affect their interests—the New England town hall meeting being the archetype for local democracy in this country.  Similarly, the most successful democratic nations have tended to be small countries with small and homogenous populations.  Trying to generalize this model to a burgeoning and increasingly desperate world any time soon is a pipe dream.

Ultimately, the problem with the prescription of universal democracy in a technical sense is that democracies, like economies, are naturalistic historical features and not a-contextual constructs to be cut out and laid down like carpet where and when they are needed.  Democracy must grow from within a cultural/historical framework. It cannot effectively be imposed any more than can a healthy economy.  As Justice Holmes observes in a letter to Harold Laski, "[o]ne can change institutions by a fiat but populations only by slow degrees and I don't believe in millennia."

Purdy also seems to conflate democracy with an ethos of liberalism.  Democracy is a form of government by majority rule, whereas liberalism is an outlook based on certain sensibilities.  If a fundamentalist Islamic nation gives its people the franchise—or if a majority of people in an established republic adopt an ideology of far right populism—they will likely vote for candidates who espouse their own values and interests rather than liberal ones. Transplanted world democracy and the redistribution of wealth are not likely to work even if the means to implement them existed.

As for the democratic Anthropocene—or any kind of Anthropocene world order—I think that John Gray gets it mostly right, that things will never get that far.  In order to understand the impracticality of this idea, we might consider a simple thought experiment in which we substitute another animal for ourselves.  It is difficult to imagine a living world reduced to a monoculture of a single species of ant or termite, for instance.31  And while humans, like ants (e.g. leafcutters), may utilize the various resources of a robust environment of which we are but a small subset, it is difficult to imagine nature surviving as a self-supporting system in a reduced state as the symbiotic garden (Gray's "green desert") along the periphery of an ant monoculture.  And so we ask: if not ants, then why humans?

In terms of Boolean logic, the reduction of nature to a kept garden—and I am not saying that Purdy goes this far—appears to be an attempt to put a larger category (nature) inside of a smaller one (human civilization), the equivalent of attempting to draw a Venn diagram with a larger circle inside of a smaller one.

Beyond the lack of realism there is also an unrealized immorality to the more extreme Anthropocene points of view.  Letting nature and the possibility of its salvation be lost is a kind of abdication that is not only monumentally arrogant but also ethically monstrous, and on a scale far greater than historical categories like slavery or even the worst instances of genocide.  One can only wonder if adherents to the Anthropocene perspectives realize the implications of their prescriptions.

We now know that the living world is far more conscious, thinking, feeling, and interconnected than we ever before suspected.32 Even the individual cells of our bodies appear to possess a Lamarckian-like interactive intelligence of their own, and we can only begin to guess at the complexities of the overlapping systems of the world biosphere.33  There is no possible way we can know the implications of the lost interrelations of whole strata and ecosystems.  To think that we can manage a vastly reduced portion of the living world to suit our needs is as unethical as it is impractical.

To give up and say that the world is already wrecked is not the same thing as saying that some abstract or hypothetical set or singular category will be lost, but rather that a large part of the sentient world will be destroyed because of us.  To put it more bluntly, how can allowing nature to be destroyed—meaning the extinction of perhaps a million or more species and trillions of individual organisms—without attempting the largest possible effort to prevent it, be any less of an atrocity than the Holocaust or slavery?  In an objective biological calculus of biodiversity, it will be manyfold worse, even if the ecological decline occurs over a period of lulling gradual change, of terraces of change and plateaus, and human adaptation.  A child who has never seen a snowy winter day, a snowy egret, or a snow leopard will not miss them any more than a child today misses a Carolina parakeet or Labrador duck.  At worst they will experience a vague sadness for something they never knew, assuming they are even taught about such lost things.

I mention this (and again, I am not saying that Purdy advocates such a position) because I would like to think that those who subscribe to the Anthropocene perspectives would have willingly fought in WWII, especially if they had been aware of the atrocities of the Nazis and Imperial Japanese.  And yet in a mere two sentences, the author seems to decree an unspecified portion of the living and sentient world to be permanently lost:

“As greenhouse-gas levels rise and the earth’s systems shift, climate change has begun to overwhelm the idea that there is a “nature” to be saved or preserved.  If success means keeping things as they are, we have already failed, probably irrevocably.”

No "nature to be preserved"?  What could this possibly mean?  Could the author mean it literally, that the living world (to include humans) is lost?  Could he mean "nature" as metaphor (whatever that means)?  As a defunct concept or "construct" of the kind that postmodernists love to contend is half of a false dichotomy?  Are environments like rainforests and reefs metaphors and human constructs?  Since this is a work of nonfiction, I will take him at his literal word, but readily concede that I might be misunderstanding this and other points of his.

And the solution:

“We need new standards for shaping, managing, and living well in a transformed world.”

"Living well," huh? What could this mean in a world soon to have 8 billion mouths to feed (Scranton, by contrast, tells us that we must learn to die well)?  How is this not Anthropocentrism?  Observe the logic here: when the alternatives are likely failure and unlikely success, don't even try to correct the problem or fix your style of play; simply change the standards and hope for the best.  Move the goalposts to suit the game you intend to play.  When reality becomes unacceptable, just diminish your expectations and change the parameters of the discussion.  When the Wehrmacht overruns Poland, France, and the Low Countries, just write off these areas as newly acquired German provinces and then do business with the new overlords.  After all, solutions have not been forthcoming to date.  He is right that things look bleak for the world, but then things looked pretty bleak in 1939 and 1940.

My sense is that beyond the brilliance and kindly nature, there is a desperation behind the outlook.  In his book Purdy asserts the stern banality that "nature will never love us as we love it" as if that were somehow related to the issue, as if to chastise naïve tree huggers with the fact that their embrace is unrequited.  But one gets the sense that he might just as easily be chiding a younger, Thoreau-like Jed Purdy over a lost love that never loved him back.  If an intelligent realization of the amorality of nature has forced him to relinquish the mistaken idea of a beloved and loving nature, perhaps he cannot let go of the universalist ideals of liberal democracy, even above the survival of much of the natural world itself.  A person must believe in something, and it is easier to accept the death of something that never loved us in return.  If we do not hold on to something, what then remains of belief, of youthful optimism, and of hope for the future beyond youth? Of course the desire for something to be true has no bearing on its actual truth.

What Purdy offers is a liberal humanist “riposte” to the undeniable biological logic of the post-humanist progressives who would extend rights to the non-human world.  Purdy makes an impressive case to preserve liberal humanism, a wholesome human tie to the land, and the dignity (if not actual rights) of animals.   

As intellectual history, After Nature is impressive, and besides minor infractions against the language no more serious than a modest penchant for words like "paradigmatic," much of it is remarkably well written.  But ultimately the importance of a book is found in the power of its ideas—its insights—rather than in the power of its presentation.  For all of its brilliance, After Nature ends up embracing hopeful speculative generalities that one may infer to be intended as superior and ahead of the pack while seeming to write off much of the living world.  In his prescriptions he is provincial in his generational ideas—ideas full of historical analysis but shorn of real historically-based policy judgment, ideas which by his own admission will not preserve nature, which he deems a defunct concept and reality.

A great analyst may fail as a practical policy planner, and the stark contrast between this book's legal and political history and its prescriptions suggests that this is the case.  Just because you are smart doesn't mean you are sensible in every case, and just because you write well doesn't mean you are right.  Great eloquence runs the risk of self-seduction along with the seduction of others; many legal cases are won by the persuasion of presentation rather than by the proximity of the claims of the winning argument to the truth of the matter.  Purdy clearly knows history, but in my opinion, he does not apply his remarkable interpretation of the past toward a realistic end.  As with some lawyers-turned-historians I have known, he seems to overestimate the power and influence of the law and political form, to include those of "democracy," on the course of human events (e.g. it was not the Confiscation Acts nor, strictly speaking, the Emancipation Proclamation that destroyed slavery, but rather the Union Army; where the law is not enforced, the law ceases to exist as a practical matter).

Purdy does not face the human fate that Scranton characterizes in Learning to Die in the Anthropocene.  This is understandable.  What is standalone brilliance and ambition in a dying world?  If Scranton is sensitive and intelligent, then Purdy is too. Perhaps more so, and presumably he has not seen Iraq.

The Grand Old Man of Biology and His Half-Earth

Half-Earth: Our Planet's Fight for Life, New York: W.W. Norton & Company, 2016, 259 pages, $25.95 (hardcover)

The human species is, in a word, an environmental hazard.  It is possible that intelligence in the wrong kind of species was foreordained to be a fatal combination for the biosphere.  Perhaps a law of evolution is that intelligence extinguishes itself.

-Edward O. Wilson

This admittedly dour scenario is based on what can be termed the juggernaut theory of human nature, which holds that people are programmed by their genetic heritage to be so selfish that a sense of global responsibility will come too late.

-Edward O. Wilson

Darwin’s dice have rolled poorly for Earth.

-Edward O. Wilson

In contrast to the three authors I have discussed so far, Edward O. Wilson is an actual scientist.  As one might expect, he is non-judgmental but equally damning in his measured observations of the devastation wrought by our kind.34  He is genial and understanding of human flaws, fears, and the will to believe, but retains few illusions, and in some ways his analysis is as dire as Gray's (Wilson coined the term Eremozoic/Eremocene, the "Era of Solitude"—which he prefers to Anthropocene).35 Unlike the others, Wilson tells us what we must do to save the planet.  He does not tell us how.

What sets him apart from the others is that he is a world-class biologist, the world authority on ants, and one of the founders of modern sociobiology.  He is intimately acquainted with the problem and has an understanding of how natural systems work that is both broad and deep.  As regards his writing, he is gentle—a good sport by temperament—and has sympathy with people and the human condition with all of its quirks and many faults.  It is striking that this gentleness does not diminish or water down his observations.

Wilson has written a great deal—including nine books written after the age of 80—and has apparently changed his mind on some important issues over the years.  He believes that humans cannot act beyond the natural imperatives that shaped us as creatures, but he does believe that we can learn and change our minds.  It is therefore noteworthy and not a little ironic that John Gray believes that our behavior is inevitable, yet one senses a tone of judgment, while Wilson believes that we may have a choice in what we are doing, and yet is forgiving, even sympathetically coaxing.

In his 2016 book Half-Earth, Wilson offers as a solution—a goal rather than a means of achieving it—a thesis with the same hemispheric name: in order to save the biosphere, it is necessary to preserve as much of the world's biodiversity as possible.  To do this, he believes that we must preserve half of the world's land surface as undisturbed, self-directed habitat.

In a book note in the March 6, 2016 edition of The New Republic titled "A Wild Way to Save the Planet" [https://newrepublic.com/article/130791/wild-way-save-planet], Professor Purdy reviewed Wilson's book with some prescience and little charity.  Purdy raises some interesting points and is correct that Wilson does not offer a practical step-by-step program or a roadmap toward this goal.  He is also right that Wilson is not at his best when speculating on the natural adaptive purpose of the free market or on population projections, and that he betrays a certain political naïveté, but then his importance is not as a social engineer or a practitioner of practical politics. He is a leader of the biodiversity movement, and a foundation dedicated to this cause bears his name.  He is also a Cassandra with the most impressive of credentials relative to his topic.  In terms of contributions and historical reputation, Wilson, who will be 90 next month (June 2019), is the most distinguished of the five commentators discussed here.

In his analysis, and after a grudging if mostly accurate overview of Wilson's positions and accomplishments,36 Purdy seems to miss the significance of Wilson's book as a poetic (as opposed to purely analytical) thesis: if we want to save the planet and ourselves, we must preserve the world's biodiversity and the unfathomable complexity of symbiosis and interconnection of the living world.  If we want to save Nature thus construed, we must dedicate about half of the planet to just leaving it alone (indeed, a plausible argument can be made that, other than setting aside wild areas, the more humankind meddles with nature—even with good intentions—the more harm we do).

Although the niches of individual species lost may be quickly filled in an otherwise rich environment, we cannot begin to imagine the implications of the structural damage we do to the overall ecosphere through wholesale destruction of habitat and species.  There may be impossibly complex, butterfly effect-like ripples leading to unforeseen ends.  Damage to the environment is often disproportionate to what we think it might be.37  Nor should we concede that the natural world is hopelessly lost already (in stating that "[i]f success means keeping things as they are, we have already failed, probably irrevocably," Purdy reveals himself to be darker than the "pessimists" who still seek mitigation), or that the goal of some writers on the Anthropocene—little more than managing what remains of nature—is all that is left to us.  In contrast, Wilson is not making a "wild" suggestion.  He is telling us what we must do to save the biosphere and ourselves with it.  In this assessment I believe he is correct.

Wilson sees the Anthropocene outlooks and their monocultural goal as pernicious anthropocentrism—a Trojan horse of human arrogance cloaked in the language of stern environmental realism.  He believes that they prescribe a greatly reduced human-nature symbiosis with humans as the senior partner.  Purdy dismisses this in a few clipped assertions with a confidence that underlies so much of his analysis here and elsewhere.  But Wilson's experience with both the Nature Conservancy and the academy, and the statements by the people he cites, bear out his beliefs (to be fair, there are degrees of the Anthropocene perspective ranging from the comparatively mild to the extreme).

Regardless, Purdy does not speak for all Anthropocene points of view—more extreme adherents do in fact couch their positions in terms of a stark and dismissive pseudo-realism that is arrogant.  Purdy seems to concede the danger of "a naturalized version of post-natural human mastery" in his own book (pp. 45-46).  As for the prescriptions of the Anthropocene perspective Wilson criticizes in Half-Earth, it would seem that they are no more realistic than those of a cancer patient who acknowledges his disease but not its terminal nature, or else realizes its seriousness and then adopts a cure that will allow the disease to kill him.  Purdy asserts that Wilson's goal is itself a reflection of just another Anthropocene outlook.

Does Wilson’s book posit an Anthropocene thesis?  Adherents to the Anthropocene define it variously as the state of affairs where nature has been irreparably damaged or altered by the activities of mankind, and as the dominant species we are thrust into the position of dealing with it one way or another.

Purdy characterizes the Anthropocene as a current that “is marked by increased human interference and diminished human control, all at once, setting free or amplifying destructive forces that put us in the position of destructive apprentices without a master sorcerer.  In this respect, the Anthropocene is not exactly an achievement; it is more nearly a condition that has fallen clattering around our heads.”38 

This is fair enough.  But what concerns our analysis of Wilson's outlook is not so much the acknowledgement of the Anthropocene as a fact or a state of affairs (or the term we use to describe it) as whether or not his view is an Anthropocene perspective like the ones he criticizes in Half-Earth, and with which Purdy at least in part concurs in After Nature (i.e. one that has accepted the ruin of the biosphere and which prescribes adaptation over mitigation).

Lawyers quibble over definitions far more than do scientists.  The sides of a good faith critical discussion should agree on terms and proceed from there. Although I find questions over definitions to be inherently uninteresting and unimportant distractions (outside of the law and similar activities), since Purdy makes the claim that Wilson’s Half-Earth thesis is an Anthropocene argument by another name, we might briefly examine if it is.39  

Is the Half-Earth hypothesis an Anthropocene argument?  I think the answer is “no.”  First of all, Wilson admits that the problem is real, that the biomass of the human species is more than 100 times that of any large animal that has ever lived.  But he also believes that the vast majority of species that comprise the current biodiversity of the world can still be preserved (i.e. the Eremocene/Anthropocene is where we are heading, but we are not yet there in any final sense).  This can be done by preserving half of the planet as habitat.  This is not a prescription for a human monoculture with a diminished natural periphery or greenbelt, but the opposite: an accommodation of the natural world as a thing apart from us, a steady-state, hands-off stewardship while curbing our own excesses.  It is mitigation.

My sense is that Wilson's perspective of the natural world as a "self-willed and self-directed" prior category, deserving of our protection as remote stewards capable of protection or destruction, is sound.  The biggest part of this protection would be simply leaving nature alone, rather than treating it as a subset to be managed—an adjunct category—or as a thing permanently wrecked, to be tolerated and adapted to (as we adapt it to us) insofar as it meets or does not interfere with our needs.40

But even if Wilson's admission of the human impact on the biosphere and a set of policies to preserve half of it technically render his argument an Anthropocene perspective, there is still a substantive difference: the difference between attempting to manage nature and leaving a large portion of it alone.  It is the difference between adapting to, and cultivating, the unfolding wreckage, and mitigation through noninterference.

In this sense, Wilson's Half-Earth is not so much an Anthropocene thesis as it is an attempt to preclude a human monoculture by setting aside half of the planet through a policy of noninterference rather than involved management.  In taking him at his word, I am inclined to say that Wilson seeks to avoid the Eremocene by preserving diversity, rather than adopting an Anthropocene perspective that declares nature to be dead and aspires to somehow live well among the wreckage.

Purdy correctly writes off Wilson's view of economic growth as "[a] naturalized logic of history" and calls it "technocratic" ("technocrat"/"technocratic"/"technocracy" are variations of a favorite smear among the post-Boomer generations, although the word appears to have multiple related but different definitions, one being "a specialized public servant."  I wonder if they would lump the men and women who implemented the New Deal, the U.S. industrial mobilization during WWII, and the Marshall Plan into this category).  When reading the review I got the feeling that Wilson's powerful sociobiological arguments rankle Purdy's strong attraction to democratic theory and related philosophy based on human exceptionality.

Ironically, Purdy admonishes the author for providing no blueprint for implementing the half-planet model, yet offers nothing stronger than generalities about global democracy.  He also writes "[a]lthough Wilson aims for the vantage of the universe—who else today calls a book The Meaning of Human Existence?—the strengths and limitations of his standpoint are those of a mind formed in the twentieth-century."  One could just as reasonably ask: who else today calls a book After Nature, regardless of whether "nature" is intended as metaphor, an outdated concept or construct, the living world and physical universe as things-in-themselves, or some or all of the above?

Likewise the bit about "the mind formed in the twentieth-century" suggests a tone of generational chauvinism, a latter-day echo of "[t]he torch has been passed to a new generation…" perhaps.  He dismisses Wilson's love of nature and his general outlook as parochial to the twentieth-century United States—an odd claim to make against the world authority on ants, the man who coined the term biodiversity, the standard-bearer of sociobiology, and a man who was bitten by a rattlesnake as a youth.

The larger implication of Purdy's dismissal of Wilson as a well-meaning but ultimately avuncular old provincial is itself a kind of local snobbery and presentism—the apparent assumption that anyone from an older generation is insufficiently evolved or sophisticated in his thinking to embrace the eschatological utopian clichés and bubbles of a later generation (Purdy was born in 1974, and so is no less a product of the twentieth century than is Wilson).  As such, Wilson is a representative of just another misled perspective to be weighed against cutting-edge sensibilities, found wanting, and waved away in spite of a modestly good effort at the end of an impressive career.

I would venture that Wilson knows both nature and history better than Purdy in terms of experience—he lived through the Great Depression, which was also the period of the regional ecological disaster called the Dust Bowl, and was a teenager during the Second World War.  These are hardly events likely to instill an excessively benevolent or uncritical view of nature or human nature.  Purdy may be right about the devastation wrought by neoliberal globalization, but I believe he is wrong about Wilson and his goal.  Both men concede the necessity of reconfiguring the human relationship with the planet.  Wilson calls for a "New Enlightenment" and a sensibility he calls "biophilia" [regarding the latter, see The Future of Life, 134-141, 200].  Purdy dismisses Wilson's feelings toward nature as just more unrequited love.  And yet Wilson's biophilia does not seem incongruent with Purdy's own "new sensibilities."

When reading Purdy's review of Wilson's book, I was reminded of a story of an earlier legal prodigy, Oliver Wendell Holmes, Jr., who, as a senior at Harvard, presented his Uncle Waldo with an essay criticizing Plato. Emerson's taciturn reply: "I have read your piece.  When you strike at a king, you must kill him."41  In spite of some good observations about weak points in Wilson's outlook (especially in areas outside of his expertise), Purdy's review didn't lay a glove on the great scientist or his general prescription.

Where Purdy is right is in the failure of human self-restraint to materialize on a scale sufficient to save the planet.  Decades of dire warnings from environmentalists have failed to arouse the world to action.  It seems unlikely that Wilson's prescription will be any more successful.  What is required is drastic, top-down action by the nations of the world.  I will discuss this in a later post.

My reading of Wilson is that the Half-Earth goal is what needs to be done in order to save the world’s biodiversity to include humankind as a small categorical subset.  He leaves the messy and inconvenient details to others.  Wilson and his idea are very much alive and if we wish to remain so, we must take it to heart.  As a person schooled in realism, I have long believed that if necessary measures are rendered impracticable under the existing power or social structure, then it is the structure and not the remedy that is unrealistic.  But the prescription has to be possible to begin with.  Let this be the cautionary admonition of this essay.      

My sense is that Wilson is right, but that his prescription is unlikely to be realized.  In my next post, I will offer what I believe could be a general outline to save the planet from environmental catastrophe.  

Adam Frank and the Biosphere Interpretation: the Anthropocene in Wide-Angle

Adam Frank, Light of the Stars: Alien Worlds and the Fate of the Earth, New York: W.W. Norton & Company, 2018, 229 pages.

Disclaimer: I am currently still reading this book (Frank gave an admirable summary of his ideas in an interview with Chris Hedges on the program On Contact).

Any book with endorsements by Sean Carroll, Martin Rees, and Lee Smolin on the dust jacket is likely to catch the attention of those of us who dabble in cosmology.  Adam Frank’s book is not about cosmological speculation or extrapolations of theoretical physics.  It is about the environment in the broadest of contexts.  It characterizes two distinct but overlapping worlds that ultimately merge.  The first is a view of life in a cosmic sense and the other is about life and civilization in a human context and scale.

On the first point, Frank sees the Anthropocene as just another transition: humans may be causing mass extinctions, but as mammals we are equally the product of a mass extinction (the extinction of the dinosaurs allowed mammals to come to the fore).  Hey, these things happen and some good may come out of them—we did.  Life will go on even if we don't, and if we ruin the world as we knew it, relax—nature will deal with it after we are gone and will create something altogether new out of the wreckage.  The Anthropocene may be bad for us—and many of our contemporary species—but we are simply "pushing" nature "into a new era" in which Earth will formulate new experiments (as all life, individual creatures, species, and periods of natural history are experiments).  We are just another experiment ourselves, quite possibly a failed one (and, if we really screw things up, the Earth might end up as a lifeless hulk like Mars or Venus).

This larger amoral picture—although undoubtedly true—seems ironic coming from someone as affable, as glib, as Frank.  But the wide-angle gets even wider.  When talking in astronomical terms, it is inevitable that any thinking reed will be dazzled by the numbers and characterizations of the dimensions of the night sky, of our own galaxy and the uncounted billions of others scattered across the observable universe beyond it.  In this respect, Frank (like any astronomer or astrophysicist throwing numbers out about the cosmos) does not disappoint.  If he had left things here, I would conclude that he is likely right, but that no thinking, feeling being could surrender to such fatalism without a fight. After all, nature makes no moral suppositions, but moral creatures do.  But he does not stop there.

Over the expanse of our galaxy (to say nothing of the observable universe), it is likely that life is common or at least exists in numerous places among the planets orbiting countless trillions of stars in the hundreds of billions of galaxies.  It seems likely that humans are rendering the Holocene a failed phase of the experiment, because it produced us.  But life will likely persist in some form regardless of how things turn out here.

Where Frank transitions from the very large to the merely human, he synthesizes the amorality of Gray with the mythmaking of Scranton toward an end perhaps along the lines of Wilson.  Unlike with Gray, there is no tone of judgment or chastisement.  On the contrary, he believes that the whole good-versus-bad placing of blame of the various "we suck" perspectives should be avoided: our nature absolves us from judgment; we are just doing what any intelligent (if immature) animal would do in our situation.

Frank analogizes humankind to a teenager—an intelligent, if inexperienced, self-centered, willful being who assumes that its problems are uniquely its own and therefore have never been experienced by anyone else before.  He assumes that the sheer numbers of planets in our neighborhood of the Milky Way suggest that there are plenty of other "teenagers" in the neighborhood, some of whom have died of their folly and their inability to change their ways.  Others may have learned and adapted.  As for us, we need to grow up, change our attitude, and learn to sing a new and more mature song.  Frank sees the human capacity for narrative as the way out, except, unlike Scranton, he believes new myths to be our potential salvation rather than just a way to die with meaning.

In an interesting parallel to Frank’s view of humans as cosmic teenagers, Wilson characterizes us and our civilization in the following terms: “We have created Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused by the mere fact of our existence, and the danger to ourselves and the rest of life.” [See Ch. 1 “The Human Condition” in Wilson’s The Social Conquest of Earth, p. 7].  So how are we supposed to grow up?

According to Frank, in order to reach a steady-state level of human life on the planet, we need new myths about what is happening in order to drive "new evolutionary behavior."  We need narratives that will not only allow nature to proceed (a la Wilson), but which actually enhance nature—making the biosphere even more vibrant and productive.  The new narratives would provide "a sense of meaning against the universe."  They would be a way out.   On this point he is like Wilson in his attempt to merge the arts and science to address the problems and embrace an all-loving biophilia.

As with Purdy, Scranton, and Wilson, Frank believes that a global egalitarianism would be necessary to achieve a steady state.  Once again the problem is how to do it.  How do we generate these narratives in a world where some powerful leaders do not concede that there is even a problem?  If the threat of nuclear annihilation and the urging of a world-historical intellect like Albert Einstein after the bloodiest war in all human history did not push humankind even an inch toward merging into a single egalitarian tribe, one must wonder if anything can (and the history of the past century shows that when you redistribute wealth, you only standardize misery).  In 1946 everybody believed that the atom bomb existed, while today there are powerful interests and world leaders who still deny the reality of human-caused climate change. Human beings would have to completely reconfigure our relationship with nature and with each other, and do it in the immediate future.  Could this be done even at the gunpoint of environmental catastrophe?  How would a candidate in a democratic system in a wealthy nation pitch such a transformation to the electorate?  Again: how do we get there?  As they say Down East, you can't get there from here.

Similarly, Frank's analogy of humankind to a self-absorbed teenager is suggestive, but is the comparison supposed to fit into the context of a lifecycle that is historical or natural-historical (i.e. is he talking about an adolescent in the context of human civilization as a phenomenon of 9,000 years, or of a species that is 200,000 years old?)?  If his idea is that our species has an outlook that is adolescent in terms of evolutionary development, then it seems unlikely that we can grow up quickly enough to become a bona fide adult; the necessary maturity to turn things around will not occur in the timeframe in which the environmental crises will unfold.  Wilson talks in similar terms in at least one of his books, that we must start thinking maturely as a species un-tethered from old theistic myths and tribalism.  And yet the current state of affairs suggests that we are as far away from that point as ever, that such tribalistic tendencies as ethnic nationalism and fundamentalist religion are as strong as ever.  The human nature analogized by Frank and Wilson is not just a sticking point to be overcome or a hurdle to be jumped, but rather a central fact of our animal nature that currently appears to be insurmountable.

One small issue I have with the book is the fact that the existence of life and civilizations on other planets is at this point purely conjectural.  The dazzling numbers Frank presents plausibly suggest that life may be fairly common—indeed, the numbers make it seem almost ridiculous to think otherwise.  But, if I recall my critical rationalist philosophy correctly, it is impossible to falsify probability, and at this point, such a claim is pure speculative probability rather than actual observation or corroboration.  He talks about a conjectural “great filter”—the idea that intelligent life kills itself off (if its maturity is far behind its intelligence).  Another pregnant conjecture.

What I liked especially was his description of James Lovelock and Lynn Margulis's Gaia hypothesis: that life is an active "player" in the environment and that it is able to keep the atmosphere oxygen rich by preventing its combination with other compounds, which would otherwise result in an oxygen-free "dead chemical equilibrium" like the atmospheres of Mars and Venus.  The biosphere therefore acts as a regulator keeping oxygen at a near-optimal 21% level of the atmospheric mix (it was not clear to me how severe periods such as ice ages fit into the "regulation" of the environment). This regulated balance is called a "steady state" (Lovelock analogizes this to the way the body of a warm-blooded organism regulates its temperature).  Lovelock intended to call this idea the "Self-regulating Earth System Theory," but at the urging of William "Lord of the Flies" Golding, settled instead on the more poetic "Gaia."

With an interest "in the question of atmospheric oxygen and its microbial origin," Lynn Margulis, the former wife of Carl Sagan, teamed up with Lovelock in 1970.  As Frank notes, "[w]here Lovelock brought the top-down perspective of physics and chemistry, Margulis brought the essential bottom-up view of microbial life in all its plenitude and power" [p. 125].  Frank observes that "[t]he essence of Gaia theory, as elaborated in papers by Lovelock and Margulis, lies in the concept of feedback that we first encountered in considering the greenhouse effect" [p. 125] and that "Lovelock and Margulis were offering a scientific narrative whose ties to the scale of world-building myth were explicit" [p. 127].  As an observation statement, the Gaia hypothesis characterizing a "self-regulating planetary system," an observable phenomenon, would seem to be something close to a scientific organon, supported by Lovelock's ingenious "Daisyworld" thought experiment; whether or not the biosphere is a singular living entity that will eliminate humans as a pathogen would still seem to be a metaphysical assertion.
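For readers who want to see the feedback idea in miniature, the Daisyworld thought experiment can be sketched in a few lines of code.  What follows is a minimal illustration in Python, using the standard textbook constants from the Watson and Lovelock model rather than anything drawn from Frank's book; the constant names and function names are my own.  White daisies reflect sunlight and black daisies absorb it, and the coupling between daisy coverage and planetary temperature keeps the climate near the daisies' optimum across a wide range of solar luminosities.

# Minimal Daisyworld sketch (after Watson & Lovelock, 1983), illustrating the
# feedback behind the Gaia hypothesis discussed above. Constants are the
# standard textbook values; this is an illustration, not Frank's own model.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 917.0           # solar flux constant used in the original model (W m^-2)
Q = 2.06e9           # heat-transfer coefficient between patches (K^4)
GAMMA = 0.3          # daisy death rate
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}

def growth_rate(temp_k):
    """Parabolic growth response, optimal near 295.5 K (about 22.5 C)."""
    beta = 1.0 - 0.003265 * (295.5 - temp_k) ** 2
    return max(beta, 0.0)

def daisyworld(luminosity, steps=2000, dt=0.05):
    """Integrate daisy coverage to an approximate steady state for one luminosity."""
    white, black = 0.01, 0.01                     # seed populations
    for _ in range(steps):
        bare = max(1.0 - white - black, 0.0)
        planet_albedo = (bare * ALBEDO["ground"]
                         + white * ALBEDO["white"]
                         + black * ALBEDO["black"])
        temp_planet4 = S0 * luminosity * (1.0 - planet_albedo) / SIGMA
        # Local temperatures: black daisies run warmer, white daisies cooler.
        temp_white = (Q * (planet_albedo - ALBEDO["white"]) + temp_planet4) ** 0.25
        temp_black = (Q * (planet_albedo - ALBEDO["black"]) + temp_planet4) ** 0.25
        white += dt * white * (bare * growth_rate(temp_white) - GAMMA)
        black += dt * black * (bare * growth_rate(temp_black) - GAMMA)
        white, black = max(white, 0.01), max(black, 0.01)  # keep seed stock alive
    return temp_planet4 ** 0.25, white, black

if __name__ == "__main__":
    for lum in (0.7, 0.9, 1.1, 1.3, 1.5):
        temp, white, black = daisyworld(lum)
        print(f"L={lum:.1f}  T={temp - 273.15:5.1f} C  white={white:.2f}  black={black:.2f}")

Run as written, the planetary temperature stays close to the daisies' optimum as the "sun" brightens, because white daisies spread and raise the albedo; with the daisies removed, temperature simply tracks luminosity.  That purely mechanical regulation, with no intention anywhere in the loop, is the modest, testable core of the Gaia idea as Frank presents it.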

Unfortunately, this is as far as I have read in his book.

Conclusion 

Building on Frank's example of humanity as an experiment flirting with failure, a friend of mine suggested the comparison of the individual human being in a time of collapse to an individual cancer cell.  Imagine that such a cell was somehow conscious and could reflect on its complicity in killing a person.  It might express regret yet philosophically conclude, "but what can I do? I am a cancer cell."  So it is with people and their kind.  Is this a denial of agency or a facing of facts?  Is it an admission that human beings—neither good nor bad in the broad focus of nature (although objectively out of balance with their environment)—are like cancer cells killing a person regardless of personal moral inclinations?  We are just the latest imbalance—like the asteroid (or whatever it was) that killed off the dinosaurs, and the other things that caused the other great extinctions of the Earth's natural history.  And so we arrive back at John Gray and biological destiny.

But even if we are cancer cells or merely a rapacious primate, I don't accept such a fate—again, Nietzsche's Will.  We are also a "thinking reed." Even if there is no free will, there is still a will with an ability to learn from mistakes and experience—we must act as if there is free will.  Gray's outlook might be a true position, and yet no person as an ethical agent can morally abide by it.  We are audacious monkeys and have to answer two questions: can we rise above our biology through reason and moderation and solve the seemingly insurmountable problems resulting from our own nature, and will we?  I believe that the answer to the first is a cautious "yes."  The answer to the second question, however, may well render it an academic point.

Consider the following historical thought experiment, also suggested to me by David Isenbergh:  Imagine if you could return to the late Western Roman Empire a few decades before it collapsed.  You see all of the imbalances, injustice, and misery of that period.  You identify yourself as a traveler from the future and tell the people you meet (you can obviously speak fifth-century Latin) that if they and their civilization do not reform their ways, there will be an apocalyptic collapse resulting in 500 years of even greater darkness and misery.  Suppose too that you were even able to get this message to the powers that be.  Do you think you would be listened to, or would you be treated as mad as events continued unaltered on their way to disaster?  As I have noted elsewhere, in a world of the blind, a clear-sighted man would not be treated as a king, but rather as a lunatic or heretic, and would likely be burned at the stake if caught.   

In Malthusian terms, we are a global plague species.  In geological/astronomical terms, we are just the latest phenomenon to fundamentally alter and test the resilience of life on Earth.  But even if these observations are true, we are also moral beings, and to embrace them as inevitable, and to recommend a posture of adaptation and wishful thinking that the planet will not deteriorate as far as the chemical equilibria of Mars and Venus, is the equivalent of justifying WWII by pointing to the postwar successes of Germany, Japan, and Israel (as regards the former two, one could observe that sanity followed psychosis).

At the end of the review of Scranton's Learning to Die in the Anthropocene, I asked: what is meaning in a dying world?  I will add only this: if the human story is coming to a close, then there is one great if austere luxury of being a part of this time that is as interesting as it is unsettling.  As individuals, we never know the full story of our lives until the very end (if even then).  If the end of progressive civilization is upon us in a matter of decades, then we have a greater and fuller understanding of the overall human project than any people at any time in history.  Rather than narratives of progress or decline, agrarian or democratic myths, historicist cycles or eschatology heading toward a terrestrial or providential endgame of history with salvation at the end, we may come to learn that history was just the progress of a plague species toward its own destruction by means of the extended phenotype that we call civilization.

Finally, one of the things I have taken away from these six books and from my own discussions on the topic is that there are two powerful generational disconnects at play.  The first is common among older people (say, over 80) who have little or no idea of the scale of the problems facing us—that modernity, civilization, and their species generally are already failed projects—but who have a certain understanding of history. 

The other disconnect is among young people who are far more in touch with ecological issues, see the problems for what they are, and whose various diagnoses and potential remedies are at least on the scale of the problems, but whose prescriptions are unrealistic to the point of utopian absurdity.  On this point, Purdy and Scranton are anomalies who know history as well as anybody, but who seem to take after others of their generation (and the subsequent generation) in being unable to apply its lessons.  Frank and Wilson know natural history and yet also speak of a global egalitarian regime.  To be fair, nobody has an answer, and even the one I find to be most realistic, when walked through step-by-step, ends up being something akin to utopian itself.

Several times I have analogized the crises of the environment to the early phases of WWII.  The current situation is unlikely to unfold as quickly as that conflict did, and it is difficult to know the point in the conflict at which we find ourselves by analogy.  It is unclear whether we are at the point in history analogous to the doomed conference at Versailles, the Japanese invasion of Manchuria, the Occupation of the Rhineland, the Spanish Civil War, the Czechoslovakian crisis, the invasion of Poland or France, Operation Barbarossa, or the attack on Pearl Harbor.  As I noted in the introduction, it is also unclear when we will cross a point of no return.  Are we to be Churchills and Roosevelts, or are we to surrender to our fate? 

There is a difference however. The solutions that brought WWII to a successful conclusion for the Allies were devised and implemented within an existing context of human life that was more or less the same after the war, that of modern industrial civilization. If we are to successfully address the environmental crises, we will have to fundamentally reconfigure, reorient the human relationship with the biosphere. Rather than an array of robust imaginative domestic measures to fix large but essentially conventional problems (economic depression, a global total war), the solutions now needed would be akin to forcing an entirely new phase of human civilization, like the shifts from hunter-gatherer life to agricultural (and urban) life, or agrarian civilization to industrial.

Sometime later this year or early next year, I hope to post another insufferably long discourse on how we might chance to turn things around.

Notes

  1. William Strunk, Jr. and E.B. White, The Elements of Style, New York: Macmillan Publishing Co., Inc., 3rd ed., 1979, pp. 71-72, 80.
  2. For the Eremocene or “Age of Loneliness,” see Edward O. Wilson, Half-Earth: Our Planet’s Fight for Life, New York: W.W. Norton & Company, 2016, p. 20.  For the Anthropocene, or “Epoch of Man,” see p. 9.
  3. David Archer, The Long Thaw, Princeton University Press, 2009, p. 1.
  4. On political disputes disguised as scientific debates see Leonard Susskind, The Black Hole War, Boston: Little Brown and Company, 2008, 445-446.
  5. Roy Scranton, Learning to Die in the Anthropocene, San Francisco: City Lights Books, 2015, p. 14.
  6. Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  7. See generally Edward O. Wilson, The Future of Life, New York: Alfred A. Knopf, 2002.
  8. Alasdair Wilkins, “The Last Mammoths Died Out Just 3,600 Years Ago, But They Should Have Survived,” March 25, 2012.
  9. Gray attributes this term to Edward O. Wilson in Consilience, New York: Alfred A. Knopf, 1998. Apparently Wilson also denies “that humans are exempt from the processes that govern the lives of all other animals.”  Wilson uses the similar term Eremocene in Half-Earth, p. 20.
  10. Edward O. Wilson, The Future of Life, p. 29. 
  11. See Karl Popper’s essay “A Realist View of Logic, Physics, and History” in Objective Knowledge, Oxford: Clarendon, 1979 (revised ed.), 285.
  12. George Kennan, Around the Cragged Hill, New York: W.W. Norton & Company, 1993, p. 142.
  13. Regarding the regulation of gases in the atmosphere, see Lynn Margulis, Symbiotic Planet, 113-128.
  14. On stable equilibria and tipping points, see generally, Per Bak, How Nature Works, Springer-Verlag New York, Inc., 2006.
  15. For Gray’s perplexing views of consciousness and artificial intelligence, see Straw Dogs, pp. 187-189.  We do not even know what consciousness is. It is therefore remarkable that Gray can assert that machines “will do more than become conscious. They will become spiritual beings, whose inner life is no more limited by conscious thought than ours.” Leaving aside weasel words like “spiritual,” it seems likely that if machines ever do become conscious, it will be the result of an uncontrolled emergent process (the way that consciousness arose as a natural phenomenon), and not the product of technological progress along the current lines of algorithms and hardware. Consciousness appears to be the result of the physical (biological/electrochemical) processes of the brain.  As anyone who has known someone with a brain injury, mental illness, or Alzheimer’s disease knows, to the degree that the brain is damaged, diseased, or otherwise diminished, the mind diminishes correspondingly, if unpredictably.  And yet like all phenomena emerging from more primal categories, the mind is not fully reducible to physical processes.  The objections to the reduction of consciousness to “mechanical principles” made by Leibniz in his Monadology are as alive and well today as they were in 1714.  See G.W. Leibniz’s Monadology, An Edition for Students, University of Pittsburgh Press, 1991, Section 17, pp. 19, 83-87.
  16. For Gray’s prescription for the human predicament, see Straw Dogs, pp. 197-199. His idea of “the true objects of contemplation” and his “aim of life as simply to see” are sensible if austere goals toward greater intellectual and psychological honesty, and are reminiscent of Nietzsche’s idea of “forgetfulness” expounded in Section 1 of his “On the Uses and Disadvantages of History for Life.”  But where Nietzsche advocates animal forgetfulness to allow people the freedom to act forthrightly and without inhibitions, Gray believes that action only makes contemplation possible and that the real goal is understanding without myths, false self-awareness, and the illusion of meaning.  See Untimely Meditations, Cambridge University Press, R. J. Hollingdale, trans., 1983 [1874], pp. 60-67. As regards the environmental crises, Nietzsche’s prescription would allow for action (although action without historical memory would seem to be a recipe for catastrophe as a basis for policy), where Gray would allow only for a dispelling of illusions, from which others might derive meaningful action even if Gray does not believe such action is possible.  His idea also has a curious, if inverse, relationship to that of Roy Scranton in Learning to Die in the Anthropocene.
  17. See Holmes’s letter to Lewis Einstein dated May 19, 1927, in The Holmes-Einstein Letters (New York: St. Martin’s Press, 1964), 264-268. On this point, Holmes is quoting Clarence Day from his book This Simian World.  
  18. Charles Dickens, A Christmas Carol.
  19. Malthus speaks of the leveling of population to match resources, p. 61.
  20. By “closed” I mean deterministic.  See generally, Karl Popper, The Open Universe, London: Routledge, 1982. In a closed universe, all events are determined and may perhaps exist in the future if time as characterized by Einstein’s block universe model is correct.  As Popper observes, in a closed universe, every event must be determined where “if at least one (future) event is not predetermined, determinism is to be rejected, and indeterminism is true” (p. 6).  In a closed universe there is chaos (deterministic disorder), and in an open universe there is randomness (objective disorder), and therefore the possibility of novelty and freedom.
  21. This analogy was suggested to me by David Isenbergh.
  22. See Chapter 10, “Fecundity,” in Annie Dillard, Pilgrim at Tinker Creek, New York: HarperCollins, 1974, pp. 161-183.
  23. For instance, see his reply to Jedediah Purdy in the January 11, 2016 number of Boston Review.
  24. See generally, Robert D. Kaplan, The Coming Anarchy, Shattering the Dreams of the Post Cold War, New York: Random House, 2000.
  25. For instance see Thomas Cahill’s popular history How the Irish Saved Civilization, and Barbara Tuchman’s chapter “‘This Is the End of the World’: The Black Death” in A Distant Mirror.
  26. Wilson, The Future of Life, 27.
  27. The Open Society and Its Enemies,  Princeton University Press, 2013 [1945], p. xliv.
  28. William Tecumseh Sherman, letter to James M. Calhoun, et al.  September 12, 1864.  Sherman’s Civil War, Selected Correspondences of William T. Sherman, 1860-1865, Brooks D. Simpson and Jean D. Berlin, eds., Chapel Hill: University of North Carolina Press, 1999, pp. 707-709.
  29. According to Jane Jacobs, the way that healthy economies arise is through naturalistic growth based on the natural and human resources of a region and import-shifting cities.  This cannot be forced or created as a part of a top-down plan (unless it is simply to rebuild existing systems, as with the Marshall Plan after WWII).  See generally Jane Jacobs, Cities and the Wealth of Nations, New York: Random House, 1984.  The idea of correcting economic imbalances through structural remedies would probably make bad situations even worse. My reading of historical events like the Russian Revolution and the period following the Chinese Civil War is that attempts to redistribute wealth only standardize misery outside of the rising clique, the new elites.  As David Isenbergh observes, power concentrates, and when it does, the new elites tend to act as badly as the old ones.  This is one reason why Marxism—although insightful in its historical observations—fails utterly in its prescriptions.
  30. As the late Tony Judt observes, “[t]here may be something inherently selfish about the social service state of the mid-20th century. Blessed with the good fortune of ethnic homogeneity and a small, educated population where almost everyone could recognize themselves in everyone else.” See Tony Judt’s Ill Fares the Land, New York: The Penguin Press, 2010.
  31. The analogy of a world dominated by ants or termites was suggested to me by David Isenbergh.
  32. See Carl Safina, Beyond Words: What Animals Think and Feel, New York: Henry Holt and Company, 2015, and Bernd Heinrich, Mind of the Raven, New York: HarperCollins, 1999.  See also Frans de Waal, Are We Smart Enough to Know How Smart Animals Are?, New York: W.W. Norton & Company, 2016, and Mama’s Last Hug, New York: W.W. Norton & Company, 2019.     
  1. On cellular intelligence, see James Shapiro, Evolution: A View from the 21st Century, Saddle River, NJ: FT Press, 2011. On symbiosis, see Lynn Margulis, Symbiotic Planet, New York: Basic Books, 1999.  See also Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  2. For instance, see generally Edward O. Wilson’s The Future of Life, New York: Alfred A. Knopf, 2002, pp. 22-41.
  3.  See note 2.   
  4. For instance, Purdy states that “Wilson is in the minority of evolutionary theorists in arguing that human evolution is split between two levels of selection: individual selection, which favors selfish genes and groups.”  I have not polled evolutionary scientists about whether or not they accept multi-level evolution, but it is safe to say that it is not the radical idea of an apostate minority.  Although not embraced by “selfish gene” ultra-Darwinists, multi-level selection is a widely accepted idea among evolutionary biologists sometimes called “naturalist” Darwinists (see generally Niles Eldredge, Reinventing Darwin, 1997; see also Stephen Jay Gould, The Structure of Evolutionary Theory, 2002).  Multi-level selection was first speculated on by Darwin himself and finds its origins in The Descent of Man, 1871, p. 166: “It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over other men of the same tribe, yet that an increase in the number of well-endowed men and the advancement in the standard of morality will certainly give an immense advantage to one tribe over another.”
  5. There are formulas to predict the loss of biodiversity relative to the loss of habitat; the number of species declines by smaller fractions than the area of habitat lost. See Edward O. Wilson, Half-Earth.
  6. Boston Review, January 11, 2016.
  7. On the unimportance of definitions in critical discussions, see Karl Popper, Objective Knowledge, 58, 309-311, 328.
  8. See Wilson, Half-Earth, pp. 77-78.  In response to a question on this point during a discussion and book signing on November 16, 2016, David Biello gave a similar interpretation of Wilson’s perspective.  Biello’s book is The Unnatural World, New York: Scribner, 2016.
  9. Mark DeWolfe Howe, Justice Holmes: The Shaping Years, 1841-1870, Cambridge: Belknap Press, 1957, 154. 

The Four Categories of The Establishment

By Michael F. Duggan

In this posting, I would like to propose an integrated way of thinking about political and policy leadership and advisement in terms of categories defined by personality type as well as by role and function.  Although I do not subscribe to the fallacy of psychologism—reducing a person’s ideas to their mental state instead of taking the concepts on their merits—I do believe that personality plays a role in the ideas one chooses and therefore in one’s policy outlook.  I do not know whether anyone has suggested a similar model; I am not aware of any.

Rather than examining policy outlooks on a conventional ideological spectrum from left to right (although these categories certainly fit into my scheme), perhaps we should look at them in terms of how categories of policy outlooks sit in relative proximity to one another on a scale from moderate to severe, by categories of temperament/personality/imagination, and by type in terms of approach and function in implementing policy.  Some categories are ideologically neutral and take on the doctrinal coloration of their milieu.  Because of this, my model has elements of both a scale and a spectrum.  The idea is to look at these things in terms not entirely reducible to ideology (which the model treats only as a single factor or intensifier), but rather in terms of how they function in the real world in regard to competing individuals and their policy positions. 

In policy, as in business, these categories of leaders and advisors are Conventionalists, The Establishment (and Establishment Types), Mavericks, and Rogues. These categories are seldom found in unalloyed form, and they may overlap, influence, build upon, and cross-pollinate with each other, even in a single person.  There are also multitudes of followers who break down along these lines.  This is not a completely fleshed-out idea, but one that I am just throwing out in nascent form.  Per usual I wrote this very quickly, so please forgive any mechanical errors.

Conventionalists
Conventionalists are men and women who subordinate their views to the perspective a la mode and whose allegiance to these outlooks they regard as necessary in order to advance themselves. These operators act with an eye to the powers that be who promote and embody the dominant ideology of the time.

The Conventionalists are often careerists and credentialists, even though credentials are seen as value-neutral instruments necessary to get ahead.  In periods of sensible policy outlook, these people can be constructive in that they reinforce positive trends by their numbers, if not a strong commitment to the good ideas.  They blow with the wind. 

Beyond self-interest, the perspective of the Conventionalist is often (at least publicly) non-ideological in a negative sense (realism may also be non-ideological, but it has been constructive in its commitment to practical goals and in its result-oriented flexibility).  The Conventionalist point of view tends toward moral neutrality or petty, functional psychopathy and the amoral sensibility that whatever advances one’s career is by definition good, regardless of the ethical and practical consequences.  Such people will adhere to a failed policy as long as it continues to be the dominant outlook or until they adumbrate its failure and the outlook that will succeed it.  The driving forces in this type are the ego, vanity, and the power drive.  

Today the Conventionalist embraces and reinforces the orthodoxy of the Washington Consensus, the outlook of the DNC and RNC and “The Blob” of the U.S. foreign policy Establishment. This ideology subscribes to neoconservatism/neoliberalism, economic globalization, a domestic economy founded on Big Finance and an ever-growing split between high-end and low-end services, and U.S. military hegemony and the industries related to it.  By virtue of the dominance of this outlook in the upper reaches of the government, it has been the controlling view of the Establishment in recent years.  As Andrew Bacevich and others have observed, you will not get anywhere in government today if you do not swear allegiance to this “deeply pernicious collective naivete” (see America’s War for the Greater Middle East, 363).  An Establishment characterized by lock-brain conformity to shared assumptions drives the dominance of conventionalism at all levels of policy.  Individuals of this type should not be confused with the lower-level career government servants who are the backbone of the Federal Government and tend to avoid the political intrigues of successive administrations.

We should note that a good (i.e. loyal or compliant/cooperative) subordinate may be a genuine protégé, or he/she may be an earnest believer in a different outlook biding his/her time (e.g. the “good soldier,” the conservative William Howard Taft, during the more progressive administration of Theodore Roosevelt).  On a less positive note, he or she may equally be an opportunistic true believer playing the part of the sycophant and waiting for their time.  One of the most common things in Washington, D.C. is the true believer boss cultivating true believer underlings.

The Establishment and Establishment Types
The Establishment is the governing mean, the formal and informal structural context in which all of these types exist and operate.  It is a median (and medium) of people and outlook. It is in principle value-neutral, but it always takes on the character and ideology of the people in it (today this is the neoliberal outlook; in the late 1940s it was dominated by moderate realists who were increasingly replaced by hardliners).  It is the generalized governmental temperament of a period, an aggregate of multiple perspectives into a status quo in which strong-minded individuals may divide the policy community into camps—into a majority as well as influential plurality and minority outlooks.  The dominant of these is the official view of the government, although historically, there have often been balancing and countervailing currents.   

An Establishment representing the outlook of an administration may avail itself of Mavericks (see below) and take on the character of their ideas (e.g. the New Deal, the Marshall Plan).  As a thing-in-being, there is always an Establishment of strong players in the system, and it seems counterintuitive to have an Establishment without a dominant view.  As with nature, a policy environment hates a vacuum and a strong personality or coalition will tip an unstable equilibrium one direction or another.  On a related note, the best presidents are always at the heart of their administration, and therefore determine or heavily influence the direction of the Establishment of their times. There are always balancing elements, resistance, and cross currents from other bastions of power and estates of the sovereign whole or aggregate.

Because the Establishment takes on the character of the dominant perspective (which can be top-down), it is altogether possible to have a Maverick or even a Rogue Establishment.  The most constructive periods of the American Establishment are those that utilize constructive/innovative ideas of Mavericks (as with the New Deal—Roosevelt was both a Maverick and Establishment Type who listened to and employed the energies of many Maverick public servants).  In terms of historical context, it is tempting—at least for me—to measure the Establishment of prior and successive periods by the baselines of the social democratic domestic Establishment of 1933-1970 (or thereabout), and the foreign policy and military Establishment of 1939-1949 (or thereabout). 

Leaders and the Establishments they head vary with the policy context and situational dictates of the time.  A sensitive leader intuits what political approach is called for and then attempts to meet those needs in terms of leadership, management, and policy/goals.  Historically there have been Bringers of Order (Charlemagne, Alfred the Great, and other notable leaders of the late Dark Ages who allowed for the conditions for the comparative order of the Later Middle Ages), Caretakers/Preservers of the Status Quo (most of the U.S. presidents between Lincoln and Theodore Roosevelt), Conservative Reformers (Grover Cleveland, and the early Theodore Roosevelt), Progressives (President and Bull Moose candidate Theodore Roosevelt, and Woodrow Wilson—the latter in an economic, if not a social justice, sense), and Transformers (the Founders/Framers taken as a whole, Lincoln, Franklin Roosevelt).  There is also an often corrupt category (e.g. urban political machines based on ethnicity and identity) that picks up the slack when the official governmental structure is insufficient or is not doing its job. This latter category, although often unsavory, is just as often constructive.

The Establishment Type
There is a distinction to be made between the Establishment Type and The Establishment.  The Establishment Type tends to be temperamentally conservative, but the best are innovators who readily embrace and utilize Mavericks, their ideas, and their prescriptions (e.g. George C. Marshall as Secretary of State with Kennan as the Director of the Office of Policy Planning).  They differ from Conventionalists in that they put the system and its well-being above themselves and their ambitions, and in the fact that they seek to do what is right in a broader sense than mere careerism. 

The best Establishment Types employ the creativity of Mavericks, and manage and contain Rogues.  In bad times, Establishment Types balance and stabilize.  Under good leadership they are also a positive element. They subordinate their careers to duty and service.  Under effective leadership, they tend to rise on a basis of merit rather than credentials. The best of this sort would include the New Deal Cabinet and the Wise Men of the 1940s such as Charles Bohlen, Averell Harriman, Harry Hopkins, Robert Lovett, George Marshall, and John McCloy.      

Mavericks
Mavericks are the idea men and women—intellectuals—and may be practical or impractical (or even utopian), constructive or pernicious.  The best of these are Cassandras and Jeremiahs who rely not on theories so much as insight and may design doctrines of their own; the worst are true believers touting rigid ideology and dogma.  The former are the intuitive creative types who see things earlier and more accurately than others do and are able to plan effectively accordingly.  More generally defined, Mavericks can be vigorous and influential intellectuals of any ideological stripe.  In some instances they may embody the cutting edge of the zeitgeist of their times, but may come to be regarded as ambiguous or even harmful in a larger historical context and in retrospect (e.g. the navalist historian and policy theorist Alfred Thayer Mahan in driving imperialism and the pre-World War One naval arms race).  

Mavericks are weighed in terms of the effectiveness of their policy prescriptions. In an administrative sense, Mavericks are measured by the degree of their influence as well as their distance from the previous status quo of the Establishment and the centrality of their role in creating a new one.  This is why a moderate realist like George Kennan, who had studied history and knew what worked in the past, what did not, and why, is as much a maverick as the first Neoconservatives, who were true believers in a theoretical ideology with questionable historical antecedents.  Kennan’s influence contributed to a moderate, if short-lived, realist Establishment that was quickly supplanted by more ideological mavericks like Acheson, Nitze, and Dulles.

The best Mavericks are insightful creative types who “think outside-of-the-box” (to use an inside-the-box cliché) and devise imaginative policy solutions.  The worst are true believers or else cynics implementing the desires of powerful interests both inside and outside of government.

The “Good” Maverick
“Good” Mavericks tend to be high-minded realists who see each new situation with fresh eyes and without assumptions other than a broad and deep base of intimate and formal historical knowledge.  Some are outsiders who made it on merit (Hamilton, Kennan).  This type of advisor may seem inconsistent to unimaginative Conventionalists and to bad or “Malignant” Mavericks when they (the Good Mavericks) prescribe different responses to superficially similar situations that are fundamentally dissimilar, or when an idea or approach did not produce favorable results when first used. The Good Maverick eschews ideology, groupthink, and over-reliance on theories and simple formulas.  Historically they have often been a special kind of outsider who succeeded on a basis of merit and insight. To work effectively, this type must be allowed space for creativity and a free hand (as with Kennan in the Office of Policy Planning, and Kelly Johnson in his Lockheed “Skunk Works”). We live in a time that despises constructive Mavericks in policy.

Given the policy types I have already mentioned, it is noteworthy that in my scheme, Mavericks shake things up, where Establishment Types tend to embrace order and the status quo but may be open to new ideas.  It is possible for the dominant strata of an Establishment to be comprised of Good Mavericks co-mingled with Establishment Types (e.g. Harriman, Kennan, Lovett, and McCloy during the immediate Post-WWII era) or else true believers (e.g. John Hay, Henry Cabot Lodge, Alfred Thayer Mahan, Theodore Roosevelt, and Elihu Root, during the age of American imperialism).

It is notable that great leaders, although often difficult to categorize or analyze in terms of systems and general reductions, must have qualities of the Maverick along with the balance, leadership, and management skills to direct the Establishment and lead the electorate.

The “Malignant” Maverick
These are the influential ideologues or true believers in theories who are able to influence leaders and colleagues, and influence policy and the nature/direction of the Establishment. They may do this with native charisma, force of personality, and the skills of departmental and political infighting.  They typically have a showy, if narrow and superficially impressive intellect that may dazzle and persuade. In extreme form they may become Rogues.  We live in a time in which this kind of Maverick has set the keynote for the Establishment.

Rogues
Rogues are the self-interested adventurers, the authoritarian lovers of power for its own sake and for gratification of the ego, the borderline or bona fide sociopathic businessman/woman, plutocrat, or military leader.  Rogues are a more extreme hybrid of the Careerist and the Maverick and may appear to be the latter (or, rather, individuals of the latter category, unchecked may morph into actual Rogues).  Where Mavericks may be either understated or charismatic, Rogues tend to be predominantly charismatic and may be powerful demagogues.  Very often they are populist juggernauts or else infighters who have figured out how to dominate within (and beyond) the rules of the system.

These are people who may reach a position where they can defy the Establishment unless and until they are somehow checked, or else may come to dominate it.  They can be useful in time of war as a military type if pointed toward an enemy and then kept on a short leash by a strong and well-established system (it is less clear what to do with them when the war is over).  Regardless of whether they are in business, the military, politics, or policy, they must never be allowed to take over or dominate.

Individuals can begin as Rogue insurgents and end up as Conventionalist Establishment Types living off reputations as bringers of change.

Conclusion
There you have it.  This is by no means a comprehensive list of “types” found in the Establishment: there are also Apostates—disillusioned true believers, idealists, and utopians who may go on to become strong critics of their former programs. There are Whistle-Blowers, a hugely important category that is even more universally despised these days than the Good Maverick. Most obviously, as a functional category, there are Principals—presidents, senators, representatives, cabinet members, department heads, and other high-level appointees.

Finally, there is also a functional category or type that I call Hidden Hands or the Opaque Player as working titles. These are quiet, omnipresent high-level advisors of the inner circle who may be team players or self-interested individuals (this may be the type that Henry Adams characterizes as “masters of the game for the sake of the game,” but may equally be loyal and dedicated public officials). In some cases their true beliefs and motives are unknown outside of their immediate circle and sometimes are not fully known even there. Some of this kind never show their ideological hand publicly, and their views may only be inferred by looking at the leaders they handle. They may be great public servants, true believers, or low-key, high-level adventurers or even careerists. They are the “hidden hands” of administrations.

Regardless of motives, the Opaque Players are typically the “smartest kid in the room” (and in the Establishment generally) and may be a handler of a president or else a henchman or a behind-the-scenes whip or button pusher on his/her behalf. They may be the real “power behind the power,” and were and are sometimes women. In our system, they are often lawyers. They know how to “work the system” and get things done and may be more responsible for implementing a program or agenda than the president him/herself. They may be a Chief of Staff or a personal/unofficial advisor of the highest level in the executive. This is the type who has the ear of the leader—has continual access—and in most administrations, is one of the few who is able and positioned to speak the unvarnished truth to his/her boss. They are able to deliver bad news to the president and offer immediate advice. Not elected, they may be the most powerful people in the government in a practical sense and under a weak leader may be a de facto chief executive.

Early Modern examples of this type may include Thomas Cromwell and Cardinals Richelieu and Wolsey. In our own tradition, Elihu Root may be an example of this type. Power is fluid in a robust system, and this type may be far broader and less apparent than suggested by this definition. As with successful conspiracies, we may never know who the greatest Opaque Players of history were. There is also a lower-level version of this kind that may act as a personal emissary, lobbyist, or representative of the president, a person who speaks with the approved authority of his/her boss (Thomas “Tommy the Cork” Corcoran might be an example).

I am not sure whether this scheme holds any water or if I have even interpreted my own ideas correctly or applied them accurately in terms of analyzing historical leaders and advisors (below).  It is still a very nascent work in progress and I just wanted to get it out there for the consideration of others.  Again, I wrote this very quickly, so please excuse any creative grammar/mechanical mistakes.

Addendum, November 30, 2020: The Most Dangerous Type: The Hyper-Competent True Believer
Like all extremists, True Believers differ from one another in the details of their beliefs (fascists and Marxists are both tribalists). Members of this type are not mere careerists, although they are oftentimes the most successful in their field. Unlike simple careerists, they are driven by unquestioned belief and an unexamined certainty in that belief. During periods when their outlook is out of season, they linger in communities of the like-minded. They are not personally corrupt; they seek to implement a program or policies favorable to their beliefs. They are quick to dismantle existing structures, traditions, and precedents that stand in their way, and are therefore not traditional conservatives, but a genus of radical, even when their outlooks are on the right. They may implement what they regard to be traditional views through activist, radical means. They are not in it for personal gain, but rather for actualizing a personal vision in the public sphere, although they will quickly and opportunistically exploit a corrupt leader or regime to further their cause.

Typically they are smart in a focused, often technical sense. Many are at the top of their class in their chosen field, such as the law. They are narrowly brilliant, but may believe in simplistic or naive religious or utopian outlooks. Because of their conspicuous brilliance, they attract young acolytes who regard them to be ingenious, legendary. They inspire fierce loyalty in proteges.

Because of their extraordinary, laser-like intelligence, they may become overconfident and overestimate their abilities in other areas and may fail spectacularly in these (and their ideas/programs may likewise fail spectacularly). They are unlikely to change their views in light of demonstrable failure of their ideas and will construct powerful rationalizations about why their ideas fail. They are therefore, at base, irrational in their views, in spite of appearing to be rational and confident. They will deny, rationalize, and transfer the failure of their larger outlook rather than concede to reality. On a related note, there is no practical difference between someone who is irrationally tied to a position and someone who is ideologically wedded to it.

True Believers may be opaque or conspicuous players. They are purely conventional in outlook, although they are tactically innovative to the point of genius.

True Believers are the most dangerous people in government.

Addendum, July 2022: The Good People
One of life’s ironies is how people with extreme ideologies and outlooks can be nice people and how people with enlightened outlooks can be horrible human beings. Hitler loved dogs and his secretaries loved him; Churchill’s secretaries hated him (admittedly, Churchill had some ugly views on race and imperialism). William O. Douglas was a holy terror to his law clerks and yet enshrined the right of privacy.

The former category I refer to as The Good People, or those who may be charming, engaging, honorable, loving, polite, and warm as individuals, but who pursue the most illiberal or otherwise destructive of policies in their official capacities. They are often a subset of True Believers. At best, they may be genuinely good people tragically caught up in circumstances and enlisted in a bad cause. It is the sort of person Henry Adams was describing when he wrote, “It is always the good men who do the greatest harm in the world” (he was writing about Robert E. Lee). As Graham Greene writes in The Quiet American, “He comes blundering in, and people have to die for his mistakes… God save us from the innocent and good.” (Graham Greene, The Quiet American, New York: Bantam, 1957, quoted by Arthur Schlesinger in Robert Kennedy and His Times, Boston: Houghton Mifflin Company, 1978, 461).

Historical Examples
In order to flesh out these categories beyond mere criteria, consider the historical examples below.  This is nothing more than a shot-from-the-hip scattershot of opinion.

  • Theodore and Franklin Roosevelt were Maverick presidents who set the tone of the Establishment of their time.
  • George C. Marshall and Dwight D. Eisenhower were military Establishment Types. An imaginative combat commander like Matthew Ridgway was a Good Maverick subordinate to them. 
  • Churchill had characteristics of a Rogue, Maverick, and a conservative imperial Establishment Type.
  • Dynamic combat officers like Curtis LeMay, Douglas MacArthur, and George Patton were extreme, frequently effective military Mavericks bordering on Rogues.  MacArthur was a cooperative Maverick Establishment Type during the rebuilding of Japan but became something like a partisan Rogue during the final phases of his command in Korea (he did return to the United States after being relieved, so he still acknowledged civil authority above him).
  • J. Edgar Hoover was a pernicious Rogue who devised a departmental Establishment that exerted influence over the entire government.
  • Huey P. Long was a populist Rogue of a state government and within the Democratic Party.
  • Robert Moses was an Establishment Rogue of the New York Port Authority.
  • Joseph McCarthy was a cynical careerist-turned-Rogue.
  • Heinrich Himmler and Albert Speer are paragon examples of the careerist—the person who does whatever is necessary to advance himself/herself to get ahead, regardless of the regime.
  • Lyndon Johnson seems to have elements of all of the above categories (except perhaps the Conventionalist).  He availed his administrations of Mavericks and Establishment Types and seemed to have both envied and despised the Eastern Establishment.
  • Richard Nixon was an odd combination of a highly individualized (almost outsider), hardball player with interesting contradictions. Like Johnson, he envied and despised the Establishment and eventually became an unhinged Rogue who, at the end of his administration, had sufficient control to resign.
  • Napoleon was a strange amalgam of an adventurer, idealist, and realist that gives him qualities of a Maverick, Rogue, and a creator of an Establishment (that collapsed with him).  One problem with a leader who rules by force of personality (other examples would be Cromwell and Castro) is that the system they put in place is difficult to sustain after them, thus creating problems of succession.
  • Adolf Hitler was the most pernicious of Rogues. He created and presided over a regime based on an extreme crackpot ideology, ethnic phobia, myths of racial warfare, and bad science. The Weimar Republic before him was a weak and ineffectual Establishment.
  • Fidel Castro was a popular rebel who became a Rogue under the guise of a utopian revolutionary.
  • Josef Stalin and Mao Zedong were pernicious utopian Rogues.
  • Howard Hughes was a Good Maverick business type who became increasingly psychotic.
  • Preston Tucker was a Good Maverick business type.
  • Jane Jacobs was a Good Maverick as independent intellectual.
  • George Washington was an aristocratic Establishment Type who devised the role of the president and demonstrated a Cincinnatus-like respect for the system by voluntarily relinquishing power at the end of two terms.  His key advisor, Alexander Hamilton, was the prototype American Maverick advisor.
  • Oliver Cromwell was a utopian Rogue and Charles II was a regal Establishment Type (here we can see outlook driving the respective roles). 
  • Otto von Bismarck was a conservative Maverick who created a domestic social welfare state and a military Establishment that only he could control. 
  • Helmuth von Moltke the Elder was a military Establishment Type who also devised revolutionary ideas within a strict organizational framework.
  • Elihu Root was a Hidden Hand/Opaque Player as well as an Establishment Type.

A Few Words about a Few Words (or: Get Your Neologisms off My Lawn!)

Michael F. Duggan

May Noam Chomsky forgive me for my snobbery.

I know that this stuff is all artificial, but to one degree or another, all wordsmiths are curmudgeons about usage.  I will leave it to others to say whether or not I qualify as a wordsmith, but I certainly have opinions on the use of words. There are people who can discourse at length about why the Webster’s International Dictionary 2nd ed. is superior to subsequent editions (it is), why the Elements of Style is “The Bible” (it is), or why they rely on The Associated Press Stylebook and Libel Manual.  More generally, everybody who writes or reads has favorite and least favorite words and preferred/least preferred usage.  Likewise, some of us have words and usages that are fine in some contexts but insufferable in others. 

There are pretentious neologisms, self-consciously trendy or generational hangnails of usage, unnecessarily technical social science or other academic jargon that has crept into the public discourse (and don’t get me started about hacks like Derrida and Heidegger), and the overuse and therefore the tweaking of existing words.  Below is a partial list of words and phrases that appeal to me about as much as fingernails drawn down a dry chalkboard.  This posting is written in a tone of faux smugness/priggishness and is not intended to be mean, so don’t take it to heart if you have ever used or otherwise run afoul of any of the offending terms. Below that is a slightly hysterical grouse I wrote a year or two ago about the recent appropriation of the word “hipster.” 

Enjoy (if that’s the right word).

  • All you need to know about… Click bait for people who want to know the bullet points of conventional wisdom on a popular or topical issue.
  • Bad Ass. A term once reserved for outlaw bikers, rogue athletes, some convicts, gang members, other criminal and quasi-criminal types, as well as tough guy soldiers, sailors, and marines. Today it is a marginally hip compliment used to describe or encourage someone modestly able to assert himself/herself or whose delicate ego could use a boost. When used as an adjective, it is a more self-dramatizing, mildly profane version of “cool” (see below).
  • Begs the question. This is a term correctly used in logic and forensics to describe an argument that assumes the very point at issue (i.e. circular reasoning) rather than proving it.  Today you will likely hear it on the news meaning something like “frames,” “poses,” “suggests,” or “implies the question…” as in the statement: “The result of today’s election begs the question of whether the nation is suffering from mass psychosis or merely a bizarre cult-like phenomenon.”
  • Bucket List. A list of things to cross off in order to know that it is time to die.
  • Cool. First used around the time of WWI, this is a ubiquitous, burned-out synonym for “good” or “desirable” in a context of pop culture conformity. A common term of reverse snobbery indicating approval and therefore social acceptance among “cool” people (including, presumably, the speaker bequeathing approval/acceptance) that is mostly identical to the post-1990s use of the word “hip” (see rant below).  Like “hip,” it was once a rebellious alternative to more conventional terms of approval like “good.” Unless I am describing a day below 60 degrees, soup that has sat around too long, or a certain kind of modern jazz, I am attempting—mostly unsuccessfully—to wean myself off of this insipid, reflexive word. It is still preferable to (and more durable than) the comically dated groovy. There is another, related usage, characterizing a kind of effortless nonchalance and grace in a person, usually understood in terms of mass culture desirability and approval.
  • DMV. Madison Avenuesque abbreviation for the “District of Columbia, Maryland, Virginia” region. I associate it with the “Department of Motor Vehicles.” If I ever become hip (modern usage) enough to voluntarily use this term without derision, I hope to be struck by a big Damned Motor Vehicle immediately thereafter. Not actually used by people from the greater Washington, D.C. area.
  • Fetishize. Verb form of fetish: to make something the object of a fetish. An obsession. To abnormally or inappropriately ascribe more importance or interest to a thing than is necessary or deserved. Fetishize is commonly used by people who fetishize words like “fetishize.”
  • Great Recession. A lazy, pseudo-historical term used by pundits in the corporate media to characterize the depression that followed the collapse of 2008 and the economic conditions that persist today throughout much of the country. A recession is a cyclic downturn in the economy; a depression is an economic crisis caused by structural flaws in the economy. The present crisis will continue, even if some economic indicators improve, as long as the structural defects in the economy remain uncorrected. The underlying causes of the crisis of 2008 are still very much in place as of this writing (2019).
  • Hero. A good word, especially when used in discussions on Greek tragedy and in literary discussions generally (e.g. the Byronic Hero, the Hemingway Code Hero, etc.). Also a good word when used sparingly, quietly, and modestly and when it does not command or demand the instant, uncritical adoration or conformity or the surrendering of opinions not shared by the majority (e.g. the silencing of a person who speaks out against a harmful or ill-conceived policy because it might affront the honor and dignity of a person who acted with courage in furtherance of such policy). There is much that is heroic in the human heart and in noble, selfless—especially self-sacrificing—acts that flow from it. In recent years it has been overused in a way that is manipulative or distracting by the corporate media. In the First World War, this kind of usage was derided as “newspaper patriotism” by those who actually served. Literary critic and WWII Marine Corps combat pilot, Sam Hynes refers to this kind of usage as a “windy word.” Ben Franklin writes about the dangers of “the hero” as a historical type. Others have written and spoken thoughtfully of the peril of nations in need of heroes and of the uncritical worship of heroes in a hard, ideological sense.
  • I am passionate about ____. An enthusiastic, youthful, way of emphasizing that one cares about something with a depth of feeling beyond the ordinary. Often heard on job interviews to express breathless eagerness.
  • Icon/Iconic. Good words in traditional usages (e.g. medieval religious portraiture).  In the modern popular and media usage, the new meaning is something like: universally emblematic of itself; characterizing the empty husk of a thing or person once fresh, original, and important, now reduced to an instantly recognizable cliché or a symbol mostly drained of any content, substance, or meaning. An image from which all depth and nuance has been sucked out, leaving a reflexively recognizable reduction (e.g. Rodin’s The Thinker, da Vinci’s Mona Lisa, and Munch’s The Scream). A complex thing reduced to a symbol or to mass culture banality. Ostensibly a compliment, being called an “icon” is in essence the same as being called a lazy, two-dimensional cliché.
  • Incentivize. To give people an incentive to do something, I suppose.
  • Influencer. Presumably someone with disproportionate influence relative to their insight, merit, wisdom, and taste, or lack thereof. Precise meaning not apparent.
  • Is that really a thing? A more diffuse way of saying “Really?” or “Is that something people actually do or believe?”
  • Juxtaposition.  Use sparingly.  Otherwise it suffers from some of the complaints against “paradigm” (see below).
  • Look. A word used by pundits on political talk shows before or at the beginning of a sentence for no apparent reason.
  • My Bad. An efficient, if ungrammatical, mea culpa for a minor infraction.
  • Miracle. A term of faith cynically and shamelessly appropriated by the media to describe an event (usually an accident or disaster) where survival or a happy outcome was dramatic, surprising, or unlikely, but well within the realm of possibility without divine intervention.
  • Narrative. A term borrowed from literary criticism and academic history departments meaning a particular ideological or personal explanation, interpretation, or version.  Often used to cast doubt on or call into question an interpretation by implying a self-serving or subjective account (or that there are no “objective” accounts).  Instead of “narrative,” I prefer “interpretation” as a less loaded alternative.  Explanations should be examined for their truth content and not dismissed solely because of a presumed perspective or the inferred state of mind of the narrator (an error of analysis known as psychologism).
  • No worries. This term obviously means “Don’t worry about it” or “No big deal/no problem.” It was appropriated from the Aussies around or just before the turn of the twenty-first century. Do not use unless you are Australian and only if followed by “mate.”
  • Paradigm/Paradigm Shift/Paradigmatic. A term that crept out of the philosophy of science of Thomas Kuhn.  A favorite term of hack academics and others trying to sound smart (see “juxtaposition”).  Outside specific academic usage, one should probably avoid this word altogether (and even when writing technically, “frame” or “framework” are less pretentious and distracting).  If someone puts a gun to your head and commands you to use the adjective form, try “paradigmic” (parr-uh-dym’-ik).  I don’t know whether or not it is a real word, but it still sounds better than “paradigmatic,” arguably the most offensive word in modern English (and your example might help start a trend for others under similar duress).
  • Reach[ing/ed] out to… Just call the guy; reaching out to him doesn’t make you a better person any more than “passing away ” makes you any less dead than someone who has simply died.
  • So, … A horrible word when said slowly and pronounced “Sooo…” at the beginning of a spoken paragraph or conversation or when starting to answer a question.  An introductory pause word common among people born after 1965. It is a word that allows the user to sound both didactic and flaky at the same time. A person who uses “So…” this way throughout all but the shortest of conversations can make some listeners from previous generations want to throw a heavy object at the nearest wall.
  • Snarky. An old term that came back in the 1990s. Just a weaker and less efficient (two-syllable) way of saying “snide.”
  • Society. A decomposing whale carcass left by the tide at the mean water mark, thus denoting a certain time and place. Although silent, it is depicted either as malodorous or once-great. The mean of dominant opinion, mores, and public opinion.
  • Spiritual/Spirituality. A word commonly (and confidently) thrown down as a solemn trump card in discussions on metaphysics but which means nothing more than a vaguer form of “religiosity” without a commitment to specific beliefs and obligations. It is a word that allows the speaker to elevate him/herself above the conformist throng of the more conventionally faithful and makes him/her seem deeper, more individualistic, and mysterious to the unwary.
    It is also an ill-defined projection of a speaker’s personality into the realm of metaphysics. It is the result of someone (often an adolescent) who wants to believe in something otherworldly when existing belief systems are found wanting, implausible, or unacceptable whole cloth. An imprecise word whose imprecision gives it a false authority or gravitas when any number of more precise words from philosophy, psychology, or theology would suffice (e.g. animism, cosmology, deism, epiphany, exaltation, inspiration, pantheism, neo-paganism, theism, transcendentalism, and New Agey cults and religions, etc.). Although the definition of words is seldom important in good faith critical discussions, one should always ask for a concise definition of spirituality whenever it comes up in conversation. Note: there may be a narrow context or range of usage where this word is appropriate, such as referring to a priest or minister as a spiritual advisor.
  • Please Talk About... A favorite, if inarticulate, invitation of radio and television interviewers with insufficient knowledge or information to ask actual questions of an expert guest, thus allowing interviewees to spin things in a way that is favorable to their perspective (e.g. “Your company is responsible for the recent catastrophic oil spill that has killed all of the marine life in the region. Talk about the safety precautions it has put in place since the spill.”).
  • Text. A noun meaning a written work or a portion of writing.  It is pretentious as hell, and I believe an inaccurate word.  Human beings do not read text. We read language.
  • Thinking outside of the box. An inspirational “inside the box” cliche expressing a good idea. Not being bound by a limiting conventional framework (or, in the narrow and correct usage in science/philosophy of science, a paradigm). Science progresses by advancing to a point where it smashes the existing frame (e.g. Special and General Relativity superseding the Newtonian edifice in the early twentieth-century). Ironically, this term is often used by conventionalist businessmen/women who somehow think of themselves as mavericks and innovators. A term favored by motivational speakers, leaders of focus groups, and other manic careerist types and their adjuncts.
  •  To be sure. A common infraction even by important historians, social commentators, and novelists when conceding a point they consider to be unimportant to the validity of their overall argument (usually at the start of a paragraph).  No less a writer than Henry Miller has succumbed to “to be sure.” It was fine in Britain 100-150 years ago, but it is hard to stomach today because of its confident overuse and how it strikes the ear as old fashioned. Consider instead: “Admittedly,” “Certainly,” “Of course,” “Albeit” (sparingly), and other shorter and less smug-sounding terms. It is still an acceptable mainstay of pirate talk, however, and, to be sure, one can easily imagine its use by Wallace Beery as Long John Silver in the 1934 movie version of Treasure Island. International Talk Like a Pirate Day is September 19.
  • Trope. An okay word that is overused to disparagingly characterize an overused story. Use it perhaps three times in your life.
  • You as well. A less efficient way of saying “You too.” A classic illustration of middle-class “syllable multiplication” (see Paul Fussell’s Class). I think people use this to add variety to their usage rather than rely solely on the less satisfying “You too.” Unconsciously, people might think that a simple sentiment may be made somehow more interesting by expressing it with more words/syllables (e.g. using “indeed” rather than “yes” in simple agreement). In a similar sense, syllable multiplication gives the illusion of adding content. A similar phenomenon is the pronunciation of some multi-syllable words with emphasis on the last syllable, giving the impression of two words (e.g. “probably” spoken as “prob-ub-lee” with emphasis on the suffix).
  • You’re very welcome. An in-your-face, parrot- or mirror-like reply to “Thank you very much.”* Common among people under 40, it may be used earnestly, reflexively, or to mock what the young perceive to be the pretentious hyperbole of older people who have the unmitigated gall to add the intensifier “very” when a simple “thank you,” “thanks,” or understated nod would suffice. Even in a time when “very” is very much overused, one should take any sincere variation of “Thank you” for how it was intended—as a gift of civility and etiquette freely offered—and a mocking or mildly snarky reply of “you’re very welcome” is at least as smug as this blog posting. *Note: The word “very” should never be used in writing as an intensifier (there are some acceptable usages such as “by its very nature”).
  • Weaponize. To give something an added function by making it into a weapon or something to be turned against another person (e.g. “She effectively weaponized the stapler by throwing it at him”).

Finally, there is a much-maligned word that I would like to resurrect or at least defend: Interesting. If used as a vague and non-committal non-description or non-answer, it should be avoided unless one is forced into using it (e.g. when one is compelled by circumstances to proffer an opinion or else be rude or lie outright; in this capacity, the guarded “interesting” never fools anybody and is usually interpreted as a transparent smokescreen for a negative opinion). However, for people who like ideas and appreciate the power and originality of important concepts, “interesting” can be used as an understated superlative—a quiet compliment, a note of approval or admiration that opens a door to further explanation and elaboration.

Essay: On the Hip and Hipsters

Present rant triggered by a routine stop at a coffee shop. 

I appreciate that language evolves, that the meanings of words change, emerge, evolve, disappear, diverge, procreate, amalgamate, reemerge, splinter-off, become obscure, and overshadow older meanings, especially in times of rapid change.  I am less sanguine about words that seem to be appropriated (and yes, I know that one cannot “steal” a word) from former meanings that still have more texture, resonance, authenticity, and historical context for me in their original usage.

For example, over the past decade, and probably going back to the 1990s, the word “hipster” has taken on a new, in some ways inverse, but not unrelated meaning to the original. As I understand it, the original “hipster” was a late 1930s-1950s blue-collar drifter, an attempted societal drop-out, a modernist descendant of the romantic hero, and a borderline antisocial type who shunned the “phoniness” of mainstream life, commercial mass culture, and trends, and who listened to authentic (read: African-American) jazz—bop—(think of Dean Moriarty from On the Road).1

He/she was “hip” (presumably an evolution of 1920s “hep”)—clued-in, disillusioned—to what was really going on in the world behind the facades and appearances. This meaning stands in contrast to today’s idea of “hip” as being in touch with current trends—an important distinction. The hipster presaged the beat of the later 1950s, who was more cerebral, contrived, literary, and urban. In the movies, the male of the hipster genus might have been played by John Garfield or Robert Mitchum. In real life, Jackson Pollock will suffice as a representative example. Hipsters were typically flawed individuals and were often irresponsible and failures as family people. But at least there was something authentic and substantial about them as an intellectual type.

By contrast, today’s “hipster” seems to be self-consciously affected right down to the point of his goateed chin: consciously urban (often living in gentrified neighborhoods), consciously fashionable and ahead of the pack, dismissive of non-hipsters (and quiet about his/her middle-to-upper-middle class upbringing in the ‘burbs and an ongoing childhood once centered around play dates), a conformist to generational chauvinism, clichés, and dictates.  Today’s hipster embodies the calculation and trendiness that the original hipsters specifically stood against (they were noticed, not self-promoted).  Admittedly, hip talk was adopted by the Beats and later cultural types and elements of it became embedded in the mainstream and then fell out of favor. Today it seems affected and corny (as Hemingway observes “…the most authentic hipster talk of today is the twenty-three skidoo of tomorrow…”).2

I realize that this might sound like a “kids these days” grouse or reduction—and I hope it is not; upon the backs of the rising generation ride the hopes for the future of the nation, our species, and the world. I have known many young people—interns and students—the great majority of whom are intelligent, serious, thoughtful, and oriented toward problem solving and social justice. There is also a strong current toward rejecting the trends of previous generations among them. The young people these days have every right to be mad at what previous generations have done to the economy and the environment and perhaps the hipsters among them will morph into something along the lines of their earlier namesake or something better.

If not, then it is likely that the word will continue to have a double meaning as the original becomes increasingly obscure or until another generation takes it up as its own.

  1. For the best analyses and commentary on the original meaning of “hip” and “hipster,” see Norman Mailer’s “The White Negro,” “Reflections on the Hip,” “Hipster and Beatnik,” and “The Hip and the Square” in Advertisements for Myself.
  2. See “The Art of the Short Story,” in The Short Stories of Ernest Hemingway, Hemingway Library Edition, 2.

The Wisdom and Sanity of Andrew Bacevich

Book Review (Unedited)

By Michael F. Duggan

Andrew J. Bacevich, Twilight of the American Century, University of Notre Dame Press, 2018. 492 pages.

What do you call a rational man in an increasingly irrational time?  An anomaly?  An anachronism?  A voice in the wilderness?  A glimmer of hope? 

For those of us who devour each new article and book by Andrew J. Bacevich, his latest volume, Twilight of the American Century—a collection of his post-9/11 articles and essays (2001-2017)—is not only a welcome addition to the oeuvre, but something of an event.  In these abnormal times, Bacevich, a former Army colonel who describes himself as a traditional conservative, is nothing short of a bomb thrower against the national security establishment.  The ominous title of the present collection does not look out of place among the apocalyptic titles of a New Left history professor (Alfred W. McCoy/In the Shadows of the American Century), an apostate New York Times journalist flirting with socialism (Chris Hedges/America: The Farewell Tour), an economics professor from Brandeis University (Robert Kuttner/Can Democracy Survive Global Capitalism?), and a prophetic legend (Jane Jacobs/Dark Age Ahead).

The new book was worth the wait.    

A collection by a prolific author with broad, deep, and nuanced historical knowledge and understanding, Twilight of the American Century lends powerful insight over a wide territory of issues, events, and personalities.  The brevity of the pieces makes it possible to pick up the book at any point or to jump ahead to areas of personal interest.  Bacevich, a generalist with depth and a distinctive voice, offers what is without a doubt one of the most sensible takes on foreign policy and military affairs today.

In terms of outlook, he is a throwback to a time when “conservatism” meant Burkean gradualism—a careful, moderate outlook advocating terraced progress over the jolts and whiplash of radical change and destabilizing shifts in policy.  This perspective is based on a realistic understanding of human nature, the idea that people are flawed and that traditions, the law, strong government, and the separation of powers are necessary to accommodate—to contain and balance—the impulses of a largely irrational animal and what Peter Viereck calls the “satanic pride” of the “unchecked ego.”

As regards policy, traditional (read “true”) conservatism is fairly non-ideological.  It holds that rapid fundamental change results in instability and eventually violence.  Those who have studied utopian projects or events like the Terror of the French Revolution, the Russian Revolution, or the Cultural Revolution may realize that this perspective is on to something.  Traditional conservatives like Viereck believe that a nation should keep those policies that work while progressing gradually in areas in need of reform.  They also embrace progressive initiatives when they appear to be working or when a more conservative approach is insufficient (Viereck supported the New Deal).  The question is whether or not gradualistic change is possible in a time of great division in popular politics and lockstep conformity and conventionalism among the members of the Washington elite. 

From his shorter works as well as books like The Limits of Power, Washington Rules, and America’s War for the Greater Middle East (to name a few) one gets two opposite impressions about Bacevich and his perspective.  The first is that he never abandoned conservatism; it abandoned him and became something very different—a bellicose radicalism of the right that is odious to true conservatives (Viereck describes this as “the smug reactionary misuse of conservatism”).  The second is more personal: that, like a hero from Greek tragedy, he realized in midlife that what he had believed to be true was wrong.  At the beginning of his brutally honest and introspective introduction to the present book, he writes:

“Everyone makes mistakes.  Among mine was choosing at age seventeen to attend the United States Military Academy, an ill-advised decision made with little appreciation for any longer-term implications that might ensue.  My excuse?  I was young and foolish.”

The implication of such a stark admission is that when one errs so profoundly, so early in life, it puts everything that follows on a mistaken trajectory.  While this seems to be tragic in the classical sense (and is certainly “tragic” in more common usage as a synonym for personally catastrophic), it also appears to be what has made Bacevich the powerful critic he has become. After all, to the wise, truth comes out of the honest realization and admission of error.  His previous “erroneous” life gives him a template of uncritical assumptions against which to judge the insights hard-bought through experience and independent learning after he arrived at his epiphany, his moment of peripeteia.  The “mistake” (more like an object lesson in harsh self-criticism) and the realizing of it with such clarity of vision and disillusioned historical understanding make him the superb and principled critic he has become (and to be frank, his career as an army combat officer gives him a certain “street cred” that cannot be easily dismissed and which he could not have earned elsewhere).  It seems unlikely that Bacevich would have arrived at his current perspective as just another university professor.

One can only speculate about whether or not he makes the truth of his early “error” out to be more tragic than it really is.  A more charitable reading is that this admission casts him as the hero in a Popperian success story of one who has taken the correct lessons from his own experiences, from trial and error.  One can hardly imagine a more fruitful intellectual rising in midlife.  It is also difficult to imagine how he would have arrived at his depth as a mature commentator via a more traditional academic route.  Apologies, I draw close to psychologizing my subject.

In order to be a commentator of the first rank, a writer must know human nature—its attributes as the paragon of animals, its foolishness, its willfulness, its murderous animal irrationality—and must have judgment and a sense of circumspection that comes from historical understanding.  You must know when to criticize and when to forgive, lest you become mean.  Twain was a great commentator because he forgave foibles while telling the truth.  Mencken is sometimes mean because he does not always distinguish between forgivable failings or weaknesses and genuine fault, and he exempts himself from his spot-on criticism of others.

An emeritus professor at Boston University, Bacevich knows history as well as any contemporary public intellectual and better than most.  His historical understanding far exceeds that of the neocon/lib critics and policymakers of the Washington foreign policy Blob.  He carries off his criticism so effectively, not by a lightness of touch, but by frank honesty and frequent humor and irony.  It is apparent from the first line of the book that he holds himself to the same standards and one senses that he is his own toughest critic. His introduction is self-critical to the point of open confession.  Bacevich is tough, but he is one of those rare people who is able to keep himself unblinkingly honest by not exempting himself from the world’s imperfections. 

He dominates discussion, then, not by raising his voice, but by reason and clarity of vision, sequences of surprising observations and interpretations that expose historical mythologies, false narratives, and mistaken perceptions, delivered in an articulate and nuanced, if at times dour, voice.  Frank to the point of bluntness, he calls things by their proper name and has what Hemingway calls “the most essential gift for a good writer… a built-in, shockproof, bullshit detector,” the importance of which goes double if the writer is a historian.  In less salty language, and in a time when so many commentators tend to defend questionable positions, Bacevich’s articles are a tonic because he simply tells the truth.

In his review of Frank Costigliola’s The Kennan Diaries, he flirts with meanness and overkill, but perhaps I am being oversensitive.  Like many geniuses—assuming that he is one—Kennan was an eccentric and a neurotic, and it is all too easy to enumerate his many obvious quirks (if we judge great artists, thinkers, and leaders by their foibles and failures, one can only wonder how Mozart, Beethoven, Byron, van Gogh, Churchill, Fitzgerald, Frank Lloyd Wright, Hemingway, and Jackson Pollock would fare; even The Bard would not escape whipping if we judge him by Henry VIII).  As a Kennan partisan who tends to rationalize the late ambassador’s personal flaws, perhaps I am just reacting as one whose ox is being gored. I am not saying that Bacevich gets the facts wrong, only that his interpretation lacks charity. He rightly calls out Kennan’s views on race.*

This outlining of Kennan’s shortcomings also struck me as ironic and perhaps counterproductive in that Bacevich is arguably the closest living analog or successor to Mr. X as a commentator on policy, both in terms of a realistic outlook and in the role of historian-as-Cassandra who is likely to be right and unlikely to be heeded by the powers that be.  Both men fill the role(s) of the conservative as liberal-minded realist, historian as tough critic, and critic as honest broker in times desperately in need of correction (i.e. a sane man in insane times).  As regards temperament, there are notable differences between the two: Bacevich strikes one as a stoical (Augustinian?) Catholic where Kennan, at least in his diaries, comes across as a Presbyterian kvetch and perhaps a clinical depressive with some ugly social views.  Like Kennan too, Bacevich is right about many, perhaps most things, but not about everything; perfection is too much to ask of any commentator and we should never seek spotless heroes.  The grounded historical realism and clear-sighted adumbration of both men is immune to the seduction of bubbles a la mode, the conventionalist clichés of liberal interventionism and neoconservatism. Such insight is a rare gift that deserves our consideration and admiration.

The book is structured into four parts: Part 1. Poseurs and Prophets, Part 2. History and Myth, Part 3. War and Empire, and Part 4. Politics and Culture.  The first part is made up of book reviews and thumbnail character studies.  If you have any sacred cows among the chapter titles or in the index, you may find your loyalties strongly tested and if you have anything like an open mind, there is a reasonable chance that your faith in a personal favorite will be destroyed.  Charlatans, true believers, puppet masters, and bona fide villains, as well as mere scoundrels and cranks including the likes of David Brooks, Tom Clancy, Tommy Franks, Robert Kagan, Donald Rumsfeld, Paul Wolfowitz, Albert and Roberta Wohlstetter, and, yes, George Kennan, all take their lumps and are stripped of their New Clothes for all to see. Throughout the rest of the book there is a broad cast of characters that receive a similar treatment. 

This is not to say that Bacevich does not sing the praises of his own chosen few, including Randolph Bourne, Charles and Mary Beard, Christopher Lasch, C. Wright Mills, Reinhold Niebuhr, and William Appleman Williams, but here too he is completely honest and provides a list of his favorites up front in his introduction (his inclusion of the humorless misanthrope Henry Adams—another Kennan-like prophet, historian, and WASPy whiner—is both surprising and not).

Where to begin?  Bacevich’s essays are widely ranging and yet embody a consistent outlook.  Certain themes inevitably overlap or repeat themselves in other guises.  He has a Twain-like antipathy for frauds and fakes and is adept at laying bare their folly (minus Twain’s punchlines and folksy persona).  The problem with our time is that these people have come to dominate and their outlooks have become an unquestioned orthodoxy among their followers and in policy circles in spite of a record of catastrophe that promises more of the same. 

To read Bacevich’s criticism is to realize that things have gone well beyond an Establishment wedded to an ideology of mistaken beliefs and into a realm of group psychosis.  One comes away with the feeling that the Establishment of our time has become a delusional cult beyond the reaches of reason and perhaps sanity.  Hume observes that “reason is the slave of the passions” and it is striking and frustrating to read powerful arguments and interpretations that are unlikely to change anything.  If anything, Bacevich’s clarity of vision, common sense, and impressive historical fluency tend to disprove the observation attributed to Desiderius Erasmus that “in the land of the blind, the one-eyed man is king.”  Rather, in a kingdom of the blind, a clear-sighted person will be ignored as a lunatic or else a marginal threat. If the kingdom is a theocracy, he will be burned as a heretic, if caught.

Are there any criticisms of Bacevich himself?  Sure.  For instance, one wonders if, like a gifted prosecutor, at times he makes the truth out to be clearer than it may really be.  In this sense his brilliant Washington Rules is a powerful historical polemic as well as an interpretive short treatment of a period (less pointed surveys of the Cold War would include Robert Dallek’s The Lost Peace, Tony Judt’s Postwar, and James T. Patterson’s Grand Expectations).  Thus it is fair to regard him as a historian with a strong jab (again, this is not to suggest that he is wrong or even that he exaggerates).  Or to put it in another, perhaps more accurate way, Professor Bacevich is one of the great interpretive historians of our time; it is just that the cynicism and abnormality of the period since 1945, and especially since 1989, make an honest accounting seem polemical. Getting history right is important, and whether one is an interpretive historian or a two-fisted counter-puncher (or both) is ultimately trivial.

Also, given the imminent threat posed by the unfolding environmental crises, I found myself hoping that he would wade further into topics related to climate change—the emerging Anthropocene (i.e. issues of population/migration, human-generated carbon dioxide, loss of habitat/biodiversity, soil depletion, the plastics crisis, etc.)—and wondering how he might fit in with commentators like John Gray, Elizabeth Kolbert, Jed Purdy, Roy Scranton, Edward O. Wilson, and Malthus himself.

The only other criticism is that Bacevich is so prolific that one laments not finding his most recent articles in this collection.  This is obviously a First World complaint.

Unlike a singular monograph, there is no one moral to this collection but a legion of lessons: that events do not occur in a vacuum—that events like Pearl Harbor, the Cuban Missile Crisis, and 9/11, and the numerous U.S. wars in the Near East all had notable pedigrees of error—and that bad policy in the present will continue to send butterfly effect-like ripples far into the future; that the stated reasons for policy are never the only reasons and often not the real ones; that some of the smartest people believe the dumbest things and that just because you are smart doesn’t necessarily mean that you are sensible or even sane; that the majority opinion of experts is often wrong; that bad arguments sometimes resonate broadly and masquerade as good ones and that without a nuanced understanding of history it is impossible to distinguish between them (even an intimate historical understanding of past events is no guarantee of sensible policy).  If there is an overarching lesson from this book it is that the United States has made numerous wrong turns over the past decades that have put it on a perilous course on which it continues today at an even greater pace: we have topped the great parabolic curve of our national history and are heading down.  Thus the title.

In short, Bacevich, along with Barlett and Steele, and a handful of other commentators on foreign policy, economics, and the environment, is one of the contemporary critics whose honesty and rigor can be trusted.  As a matter of principle, we should always read critically and with an open mind, but in my experience, here is an author whose analysis can be taken as earnest, sensible, and insightful.  He is also a writer of the first order, and the book is a triumph of applied history.

My recommendation is that if you have even the slightest feeling that things are amiss in this nation, its governance and policy, or if you are simply earnest about testing the validity of your own beliefs, whatever they are, you should read this book.  If you think that everything is fine with the country and its policy course, then you should buy it today and read it cover to cover.  After all, there is nothing more dangerous than a true believer and we arrive at wisdom by correcting our mistaken beliefs in light of experience, good faith discussion, and more powerful arguments to the contrary.

Postscript
Having had a chance to read Professor Costigliola’s recent (2023) biography, Kennan, A Life between Worlds, I now believe that Bacevich’s criticisms are entirely warranted.

A Wonderful Life?

By Michael F. Duggan

I have always loved the Capra holiday classic It’s a Wonderful Life, but have long suspected that it is a sadder story than most people realize (in a similar but more profound sense than Goodbye, Mr. Chips).  One gets the impression from the early part of the film that George Bailey could have done anything but was held back at every opportunity.  After watching it last year, I tried to get my ideas about the film organized and wrote this analysis.

In spite of its heart-warming ending, the 1946 Christmas mainstay by Frank Capra, It’s a Wonderful Life, is in some ways an ambiguous film and likely a sad story. George Bailey, the film’s protagonist played by Jimmy Stewart (in spite of his real-life Republican leanings), is the kind of person who gave the United States its most imaginative set of political programs from 1933 to 1945, policies that shepherded the country through the Depression, won WWII, and resulted in the greatest period of economic prosperity, from 1945 until the early 1970s. Bailey wants to do “something big and something important”—to “build things,” to “plan modern cities, build skyscrapers 100 stories high… bridges a mile long… airfields…” George Bailey is the big thinker—a “big picture guy”—and his father, Peter Bailey, the staunch, sensible, and fundamentally decent local hero. We need both kinds today.

In a moment of frankness bordering on insensitivity, George tells his father that he does not want to work in the Bailey Building and Loan, that he “couldn’t face being cooped up in a shabby little office… counting nickels and dimes.”  His father recognizes the restlessness, the boundless talent and quality, the bridled energy, the wide-angle and high-minded ambition of his son.  Wounded, the senior Mr. Bailey agrees with George, saying “You get yourself an education and get out of here,” and dies of a stroke the same night (his strategically-placed photo remains a moral omnipresence for the rest of the film, along with photos of General Pershing and U.S. presidents to link local events to broader historical currents).

One local crisis or turn of events after another stymies all of George’s plans to go abroad and change the world just as they are on the cusp of fruition. Rather than a world-changer, he ends up as the local fixer for the good—a better and more vigorous version of a local hero, a status that confirms his “wonderful life” at the film’s exuberant ending, where a 1945 yuletide flash mob descends on the Bailey household, thus saving the situation by returning years’ worth of good faith, deeds, and subsequent material wealth and prosperity. But what is it that sets George apart from the rest of the town that comes to depend upon him over the years?

At the age of 12 he saves his younger brother Harry from drowning (and by extension, a U.S. troopship in the South Pacific a quarter of a century later), leaving him deaf in one ear.  Shortly thereafter, his keen perception prevents Mr. Gower, the pharmacist (distracted by the news of the death of his college student son during the Spanish Flu pandemic of 1918-1919), from accidentally poisoning a customer.  As a young adult, George’s speculating about making plastics from soybeans by reviving a local defunct factory adds to the town’s prosperity and makes a fortune for his ambitious but less visionary friend, Sam “hee-haw” Wainwright, but not himself.

Other than saving the Building and Loan from liquidation, George’s primary victory is marrying his beautiful and wholesome sweetheart—“Marty’s kid sister”—Mary (Donna Reed) and raising a family.  With a cool head, insight, and the help of his wife, he stops a run on the Building and Loan in its tracks with their in-hand honeymoon funds.  The goodwill is reciprocated by most of the institution’s investors (one notably played by Ellen “Can I have $17.50” Corby, later Grandma Walton).

From there George goes on to help an immigrant family buy their own house and in fact helps build an entire subdivision for the town’s respectable working class, all the while standing up to the local bully: the cartoonishly sinister plutocratic omnipresence and dark Manichaean counterweight to everything good and decent in town, Mr. Potter (Lionel Barrymore).  Potter is the lingering unregulated nineteenth century, a caricature of the predatory robber baron, a dinosaur that in modified form cooked the economy during the 1920s, resulting in the Great Depression.  Even Potter comes to recognize George’s quality and, with an approach distantly related to charm, unsuccessfully attempts to buy him off (after presenting a brutally accurate assessment/summary of George’s life to date).

During the war, George’s bad ear keeps him out of the fighting (unlike the real Jimmy Stewart, who flew combat missions in a B-24), and he makes himself useful with such patriotic extracurriculars as serving as an air raid warden and organizing paper, rubber, and scrap metal drives.  And yet he seems to have adapted to and even accepted his fate of being tethered to the small financial institution he inherited from his father, and therefore the role of the town’s protector. He seems more or less happily resigned to his fate as a thoroughbred pulling a scrap metal wagon.

Were George Bailey just another guy in Bedford Falls or most towns in the United States (or, in Old Man Potter’s words, “if this young man was a common, ordinary yokel”), this would indeed be a wonderful life, and for most of us it would be.  Even with all of his disappointments, his life is a satisfactory reply to the unanswerable Buddhist question, “How good would you have it?”

Taken at face value, George seems to be a great success at the end of the movie, in case this is not abundantly clear from the boisterous but benevolent 1940s Christmastime riot of unabashed exuberance—a reverse bank run, a bottom-up version of a New Deal program, or a spontaneous neighborhood Marshall Plan—that closes the film. His life’s investment in common decency pays dividends he did not imagine because it was all too close and familiar. Indeed, George’s bailout upstages his brother—now a Medal of Honor recipient—who proclaims, “To George Bailey, the richest man in town.”  This is confirmed in the homey wisdom inscribed in a copy of Tom Sawyer by George’s guardian angel Clarence (a silly device and comic relief in a story about attempted suicide), that “No man is a failure who has friends.”

Of course Clarence is introduced into an already minimally realistic story to provide George with the exquisite but equally silly luxury—“a great gift”—of seeing what would have become of the town and its people without him (although to a lover of jazz, the counterfactual business district of Pottersville—an alternate reality to the overly precious Norman Rockwellesque Bedford Falls—is not completely lacking in appeal, with its hot jazz lounges, jitterbugging swing clubs, a billiards parlor, a (God forbid) burlesque hall, stride piano, and what appears to be a fleeting cheap shot at Fats Waller).

In this Hugh Everett-like alternate narrative device and dark parallel universe, he sees that his wife Mary is an unhappy mouse-like spinster working in a (God forbid) library; that Harry drowned as a child and was therefore not alive in 1944 to save a fully-loaded troop transport in the South Pacific.  Likewise, everybody else in the town is an embittered, antisocial, outright bad or tragic version of themselves relative to the personally frustrating yet generally wonderful G-rated version of George’s wonderful life and town.

The problem is that George is not ordinary. He is no mere careerist, conventionalist, or money-chasing credentialist—he is a quick-thinking, from-the-gut maverick problem-solver with a heart of gold. He is exactly the kind of person we need now, but whom the establishment of our own time despises.  Although harder to spot on sight in our own time, the charming and attractive Mr. Potters of the world have won.

In literary terms, George is not a typical beaten-down loser-protagonist of the modernist canon; he is not Bartleby the Scrivener, Leopold Bloom, J. Alfred Prufrock, Willy Loman, William Stoner, or one of the clueless victims of Kafka, but then neither is his stolid father. George is more akin to Thomas Hardy’s talented but frustrated Jude Fawley or a better version of James Hilton’s Mr. Chips—characters who might have amounted to more had they not been limited or constrained by internal and external circumstances.

Even more so, George is a descendant or modern cousin to the tragic-heroic protagonists of the Greeks and Shakespeare (i.e. a person who could have pushed the limits of human possibility). If only he could have gotten up to bat.  He might have done genuinely great things, had his plans gotten off the ground, had the unforeseen chaos of life and social circumstances not intervened. We have seen what things would have been like without George, but we can only wonder what might have been if he had been allowed to succeed. Let’s see Clarence pull that trick out of his hat.  

Just after breaking his father’s heart by revealing his ambitions, George confides to the older man that he thinks he is a “great guy.”  True enough.  But the conspicuous fact is that the older Bailey is much more on the scale of a local hero, a “pillar of the community”—a necessary type for extinguishing any town’s day-to-day brush fires, and he is therefore perhaps more fully actualized and resigned to his modest role (even though it kills him mere hours later, or was it George’s announcement of his ambitions and desire to leave?).  But George has bigger plans and presumably the abilities to match.

In a perfect world, someone like Mr. Bailey, Sr. would be better (and in fact is) cast in the role to which his son is relegated, even though his ongoing David versus Goliath battles with Potter likely contributed to his early death.  George might have found an even more wonderful life if he had gone to college and law school and then gone to Washington to work for Tommy Corcoran and Ben Cohen drafting legislation, or as a project manager of a large New Deal program, or managing war production against the Nazis and Imperial Japanese.  Instead he admonishes people to turn off their lights during air raid drills.  In a better world, a lesser man could have handled all the relative evils of Bedford Falls. It’s a Wonderful Life is a tale of squandered talent.

Of course an alternative reading is that George is delusional throughout the movie, that he is not as great as we are led to believe, that—like most of us—he is not as good as his biggest dreams would suggest. Desire ain’t talent. But there is nothing in the film to suggest that this is the case. And the film’s ending suggests the opposite (to say nothing of its place in the Capra canon—compare and contrast the ensemble and the feel of this film with those of Capra’s 1938 version of George S. Kaufman’s You Can’t Take It with You, which also features Jimmy the Raven and a completely lovable Lionel Barrymore).

The moral for our own time is that we need both kinds of Mr. Baileys—the father and the son—and it is clear that in spite of numerous local victories, George could have done far more in the broader world (his shorter, less interesting younger brother, Harry, seems to have unintentionally hijacked George’s plans and makes a good go of them: he goes off to college, lands a plum research position in Buffalo as part-and-parcel of marrying a rich and beautiful wife, disproportionately helps to win a world war, and returns after flying through a snowstorm—amazingly, as the same happy-go-lucky prewar kid brother—complete with our nation’s highest military honor after lunching with Harry and Bess Truman at the White House). George is the Rooseveltian top-down planner and social democrat while Mr. Bailey, Sr., is the organic, Jane Jacobs localist. Harry provides a glimpse at what George might have accomplished.

Even if we accept Capra’s questionable premise that George’s life is the most wonderful of possible alternatives (or at least a pretty darned good one), the ending is not an entirely satisfactory Hollywood ending. George’s likable but absent-minded Uncle Billy (Thomas Mitchell) inadvertently places $8,000 (perhaps ten or twenty-fold that amount in 2018 dollars) into Mr. Potter’s hands (a crime witnessed and abetted by Mr. Potter’s wheelchair-pushing flunky, who, without uttering a single word, is arguably the most despicable person in the film—an equal and opposite silent counterpart to the recurring photograph of the late Mr. Bailey, Sr.), and his honest mistake is never revealed nor presumably is the money ever recovered.

Mr. Potter’s crime does not come to light, and George is nearly framed by the incident and driven to despair. Instead of a watery, self-inflicted death in the surprisingly deep Bedford River, he is happily bailed out (Bailey is bailed out after bailing out the town so many times), first by a homely angel and then by the now prosperous town of the immediate postwar.

The fact that his rich boyhood chum, the affable, frat-boyish Sam Wainwright, is willing to advance $25,000 out of his company’s petty cash puts the crisis into broader perspective and makes us realize that George was never really in that much trouble, at least not financially (although the Feds might have found such a large transfer to a close friend with a mysterious $8,000 deficit to be suspicious).  Wainwright’s telegram is a comforting wink from Capra himself.  Had he not been so distracted by an accumulation of trying circumstances—the daily slings and arrows of being a big fish in the plunge basin of Bedford Falls (to mix metaphors)—this kindness of Sam’s and of the whole town is something that George might have intuited himself, thus averting his breakdown in the first place.  The bank examiner (district attorney?), in light of the crowd’s vouching for George’s reputation with a cash-flow cornucopia, tears up the summons and lustily joins in singing “Hark! The Herald Angels Sing.” We know that the townspeople will be paid back with interest greater than a ten-year war bond.

Still, the loss of $8,000 in Bedford Falls in 1945 is a crisis that drives George to the brink of suicide.  This is a movie about hitting one’s limit. The seriousness of the crisis is another manifestation of the scale of events to which George has been consigned. If he had been a manager of wartime industrial production or a 1940s industrialist, like Sam Wainwright, a similar deficit would have been a rounding error on a government contract that nobody would have noticed. On a side note, it would have been more appropriate for Heaven to have dispatched its resources to war-ravaged Europe in late 1945, rather than to a single person in a prosperous American town (or was the Marshall Plan really the Clarence Plan?).

At the movie’s end, George is safe and obviously touched by the outpouring of his community and appreciates just how good things really are (and you just know that a scene beginning with Donna Reed rushing in and clearing off an entire tabletop of Christmas-wrapping paraphernalia to make room for the charitable deluge to follow is going to be ridiculously heart-warming). His life may not have been on a grand scale, but the historical course of events that includes him is clearly better than the alternative.

At the film’s end, George is just as local as he was at the beginning. He has been powerfully instructed to be happy with the way things have turned out (why not, it’s almost 1946 in the United States, after all, and the bigger events in the world appear to have turned out just fine, right?).  His wonderful life has produced a wonderful effort to meet a (still unsolved) crisis.  But the thought lingers: could Clarence have shown him an alternative life’s course in which he was able to pursue his dreams? Just imagine what he could have done with 1940s federal funding and thousands of similarly well-intended people to manage—like those who engineered the New Deal, the WWII military and industrial mobilization, and the Marshall Plan. Would his name have ranked along with the likes of Harry Hopkins, Harold Ickes, Rex Tugwell, Adolf Berle, Raymond Moley, Frances Perkins, Thomas Corcoran, Benjamin Cohen, Averell Harriman, George Marshall, and Franklin and Eleanor themselves?

It is impossible to resist the warmth and decency of this film’s ending (I have watched it in June and July), and I know that this essay has been minute and dissecting in its analysis. But what lessons might we take?  I think the moral for those of us in 2018 is that below the surface of this wonderful movie is the cautionary tale that if we are to face the emerging crises of our own time, we will need a whole Brains Trust worth of George Baileys in the right places and legions of local people like his father throughout the nation.  There is a danger in shutting out the George Baileys of our time or consigning them to the wrong role. And yet our system as it exists today seems designed to do just that. We must also come to recognize the Mr. Potters of big business, big finance, and their minions in the halls of political power who have dominated American public life for the past half-century.  I suspect that they look nothing like Lionel Barrymore.

Geoffrey Parker’s Global Crisis

Book Review

Geoffrey Parker, Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century, Yale University Press, 2014, 904 pages.

Reviewed by Michael F. Duggan

This book is about a time of climate disasters, never-ending wars, economic globalism complete with mass human migration, imbalances, and subsequent social strife—a period characterized by unprecedented scientific advances and backward superstition.  In other words, it is a world survey about the web of events known as the 17th century.  Although I bought it in paperback a number of years ago, I recently found a mint condition hardback copy of this magisterial tome by master historian, Geoffrey Parker (Cambridge, St. Andrews, Yale, etc.), and felt compelled to write about it, however briefly.  I am drawn to this century because of its contrasts as the one that straddles the transition from the Early Modern to the Age of Reason and Enlightenment and more broadly marks the final shift from Medieval to Modern (even before Salem colonists hanged their neighbors suspected of witchcraft, Leibniz and Newton had independently begun to formulate the calculus).

In 1959, British historian H. R. Trevor-Roper presented the macro-historical thesis of the “General Crisis,” the interpretive premise characterizing the 17th century as an overarching series of crises, from horrible regional wars (e.g. the Eighty Years War, the Thirty Years War, the English Civil War and its roots and spillover into Scotland and Ireland) and rebellions, to widespread human migration and the subsequent spread of disease, any number of epidemics, global climate change, and a long litany of some of the most extreme weather events in recorded history (e.g. the Little Ice Age).  When I was in graduate school, I had intuited this premise on my own (perhaps after reading Barbara Tuchman’s A Distant Mirror, about the “Calamitous 14th Century”), but was hardly surprised to discover that Trevor-Roper had scooped me by 40 years.

Parker has taken this thesis and generalized it in detail beyond Europe to encompass the entire world, including catastrophic events and change throughout the Far East, Russia, China, India, Persia, the greater Middle East, Africa, and the Americas.  Others, including Trevor-Roper himself, also saw these in terms of global trends and scope, but, to my knowledge, Parker’s book is the fullest and most fleshed-out treatment.  His work “seeks to link the climatologists’ Little Ice Age with the historians’ General Crisis—and to do so without ‘painting bull’s eyes around bullet holes.’”  It is academic history, but it is well written and readable for a general audience.  It is well-researched history on a grand scale.  For Western historians, such as myself, the broader perspective is eye-opening and suggestive of human commonality rather than divergence.  We are all a part of an invasive plague species and we are all victims of events, nature, and our own nature.

Although I am generally skeptical of macro interpretive premises that try to explain or unify everything that happened during a period under a single premise—i.e. the more a theory or thesis tries to explain, the more interesting and important, but the weaker it usually is as a theory and therefore the less it explains (call it a Heisenberg principle of historiography)—this one is on to something, at least as description.  The question, I suppose, is the degree to which the events of this century, overlapping or sequential in both geography and time, are interconnected or emerge from common causes, or whether they were a convergence of factors both related and discrete: is the century a crisis, a sum of crises, or both?  Correlation famously does not establish causation.  To those who see human history in the broadest of terms—in terms of the environment, of humankind as a singular prong of biology, and therefore of human history as an endlessly interesting and increasingly tragic chapter of natural history—this book will be of special interest.

In college I was ambivalent about the 17th century.  More than most centuries, it was an “in between times” period, neither one thing nor the other.  All periods are artificial and intermediary, but the 17th century seemed especially artificial given the fundamental advances and shifts in intellectual history that occurred in Europe between 1601 and 1700.  In the West, the 18th century seemed like a coherent, unified world, the Newtonian paradigm.  But the 17th century was a demarcation, a cauldron from which the world of the Enlightenment and the Age of Reason emerged.  The 18th century was the sum and creation of the previous century, a world unified under Bacon, Descartes, Spinoza, Leibniz, Locke, Newton, and many others, and today I find the earlier period to be the more interesting of the two.  This book only feeds this belief.

As someone who thinks that one of the most important and productive uses of history is to inform policy and politics, it is apparent to me that the author intends this book to be topical—a wide-angle yet detailed survey of another time, for our time.  In general the 17th century is a good tonic for those who believe that history is all sunshine and light or that human progress (such as it is) is all a rising road.  It also serves as a cautionary example of what may be coming in our own time, and a reminder that humanity is a subset of the planet and its physical systems.  A magnum opus of breathtaking breadth and ambition, this book is certainly worth looking at (don’t be put off by its thickness; you can pick it up at any point and read a chapter here or there).

Fat Man and Little Boy

I wrote this for the 70th Anniversary of the atomic bombings of Japan.  It appeared in an anthology at Georgetown University.  This is taken from a late draft, but the editing is still a bit rough.

Roads Taken and not Taken: Thoughts on “Little Boy” and “Fat Man” Plus-70

By Michael F. Duggan

We knew the world would not be the same.  A few people laughed, a few people cried. Most people were silent.  I remembered the line from the Hindu scripture, the Bhagavad Gita… “I am become Death, the destroyer of worlds.”

-Robert Oppenheimer

When I was in graduate school, I came to characterize perspectives on the decision to drop the atomic bombs on Japan into three categories.

The first was the “Veterans Argument”—that the dropping of the bombs was an affirmative good.  As this name implies, it was a position embraced by some World War Two veterans and others who had lived through the war years, and it seems to have been based on lingering sensibilities of the period.  It was also based on the view that the rapid end of the war had saved many lives—including their own, in many cases—and that victory had ended an aggressive and pernicious regime.  It also seemed tinged with an unapologetic sense of vengeance and righteousness cloaked as simple justice.  They had attacked us, after all—Remember Pearl Harbor, the great sneak attack?  More positively, supporters of this position would sometimes cite the fact of Japan’s subsequent success as a kind of moral justification for dropping the bombs.

Although some of the implications of this perspective cannot be discounted, I tended to reject it; no matter what one thinks of Imperial Japan, the killing of more than 150,000 civilians can never be an intrinsic good.  Besides, there is something suspect about the moral justification of horrible deeds by citing all of the good that came after them, even if true.1

I had begun my doctorate in history a couple of years after the fiftieth anniversary of the dropping of the Hiroshima and Nagasaki bombs, and by then there had been a wave of “revisionist” history condemning the bombings as intrinsically bad, as inhumane and unnecessary—as “technological band-aids” to end a hard and bitter conflict.  The argument was that by the summer of 1945, Japan was on the ropes—finished—and would have capitulated within days or weeks even without the bombs.  Although I had friends who subscribed to this position, I thought that it was unrealistic in that it interjected idealistic sensibilities and considerations that seemed unhistorical to the period and the “felt necessities of the times.”  Was it realistic to project 1990s moral observations onto people a half-century earlier in the midst of the most destructive war in human history?

This view was also associated with a well-publicized incident of vandalism against the actual Enola Gay at a Smithsonian exhibit, which ignited a controversy that forced the museum to change its interpretive text to tepid factual neutrality.

And then there was a kind of middle-way argument—a watered-down version of the first—asserting that the dropping of the bombs—although not intrinsically good—was the best of possible options.  The other primary option was a two-phased air-sea-land invasion of the main islands of Japan: Operation Olympic, scheduled to begin on November 1, 1945, and Operation Coronet, scheduled for early March 1946 (the two operations were subsumed under the name Operation Downfall).  I knew people whose fathers and grandfathers, still living, had been in WWII and believed with good reason that they would have been killed fighting in Japan.  It was argued that the American casualties for the war—approximately 294,000 combat deaths—would have been multiplied two or three fold if we had invaded, to say nothing of the additional millions of Japanese civilians who would likely have died resisting.  The Okinawa campaign of April-June 1945, with the viciousness and intensity of the combat there and the appalling casualties on both sides, was regarded as a kind of microcosm, a prequel of what an invasion of Japan would be like.2

The idea behind this perspective was one of realism: that in a modern total war against a fanatical enemy, one took off the gloves in order to end it as soon as possible.  General Curtis LeMay asserted that it was the moral responsibility of all involved to end the war as soon as possible, and that if the bombs shortened it by even a single day, then using them was worth the cost.3  One also heard statements like “what would have happened to an American president who had a tool that could have ended the war, but chose not to use it, and by doing so doubled our casualties for the war?”  It was simple, if ghastly, math: the bombs would cost less in terms of human life than an invasion.  With an instinct toward the moderate and sensible middle, this was the line I took.

In graduate school, I devoured biographies and histories of the Wise Men of the World War Two/Cold War era foreign policy establishment—Bohlen, Harriman, Hopkins, Lovett, Marshall, McCloy, Stimson, and of course, George Kennan.  When I read Kai Bird’s biography of McCloy, The Chairman: John J. McCloy and the Making of the American Establishment, I was surprised by some of the back stories and wrangling of the policy makers and the decisions behind the dropping of the bombs.4  It also came as a surprise that John McCloy (among others) had in fact vigorously opposed the dropping of the atomic bombs, perhaps with very good reason.

Assistant Secretary of War John McCloy was nobody’s idea of a dove or a pushover.  Along with his legendary policy successes during and after WWII, he was controversial for ordering the internment of Japanese Americans and for not bombing the death camps in occupied Europe, because doing so would divert resources from the war effort and victory.  He was also the American High Commissioner for occupied Germany after the war and had kept fairly prominent Nazis in their jobs and kept out of prison German industrialists who had played ball with the Nazi regime. Notably, in the 1960s, he was one of the only people on record who flatly stood up to President Lyndon Johnson after getting the strong-armed “Johnson treatment” and was not ruined by it.  And yet this tough-guy hawk was dovish on the issue of dropping the atomic bombs.

The story goes like this: In April and May 1945, there were indications that the Japanese were seeking a settled end to the war via diplomatic channels in Switzerland and through communications with the Soviets—something that was corroborated by U.S. intelligence.5 Armed with this knowledge, McCloy approached his boss, Secretary of War, and arguably father of the modern U.S. foreign policy establishment, “Colonel” Henry L. Stimson.  McCloy told Stimson that the new and more moderate Japanese Prime Minister, Kantaro Suzuki, and his cabinet were looking for a face-saving way to end the war.  The United States was demanding an unconditional surrender, and Suzuki indicated that if this language was modified and the Emperor was allowed to remain as a figurehead under a constitutional democracy, Japan would surrender.

Among American officials, the debates on options for ending the war included many of the prominent players, policy makers and military men like General George C. Marshall, Admiral Leahy and the Chiefs of Staff, former American ambassador to Japan Joseph Grew, and Robert Oppenheimer (the principal creator of the bomb) and his Scientific Advisory Panel, to name but a few.  It also included President Harry Truman.  Among the options discussed was whether or not to give the Japanese “fair warning” and whether the yet-untested bomb should be demonstrated in plain view of the enemy.  There were also considerations of deterring the Soviets, who had agreed at Yalta to enter the war against Japan, from additional East Asian territorial ambitions.  Although it was apparent to Grew and McCloy that Japan was looking for a way out, therefore making an invasion unnecessary, the general assumption was that if the atomic bombs were functional, they should be used without warning.

This was the recommendation of the Interim Committee, which included soon-to-be Secretary of State James Byrnes, and which was presented to Truman by Stimson on June 6.6  McCloy disagreed with these recommendations and cornered Stimson in his own house on June 17th.  Truman would be meeting with the Chiefs of Staff the following day on the question of invasion, and McCloy implored Stimson to make the case that the end of the war was days or weeks away and that an invasion would be unnecessary.  If the United States merely modified the language of unconditional surrender and allowed for the Emperor to remain, the Japanese would surrender on de facto unconditional terms.  If the Japanese did not capitulate after the changes were made and fair warning was given, the option of dropping the bombs would still be available.  “We should have our heads examined if we don’t consider a political solution,” McCloy said.  As it turned out, he would accompany Stimson to the meeting with Truman and the Chiefs.

Bird notes that the meeting with Truman and the Chiefs was dominated by Marshall and focused almost exclusively on military considerations.7  As Bird writes, "[e]ven Stimson seemed resigned now to the invasion plans, despite the concession he had made the previous evening to McCloy's views.  The most he could muster was a vague comment on the possible existence of a peace faction among the Japanese populace."  The meeting was breaking up when Truman said, "No one is leaving this meeting without committing himself.  McCloy, you haven't said anything.  What is your view?"  McCloy shot a quick glance at Stimson, who said to him, "[s]ay what you feel about it."  McCloy had the opening he needed.8

McCloy essentially repeated the argument he had made to Stimson the night before.  He also noted that a negotiated peace with Japan would preclude the need for Soviet assistance, thereby depriving the Soviets of any excuse for an East Asian land grab.  He then committed a faux pas by actually mentioning the bomb by name and suggesting that it be demonstrated to the Japanese.  Truman responded favorably, saying, "That's exactly what I've been wanting to explore… You go down to Jimmy Byrnes and talk to him about it."9  As Bird points out,

[b]y speaking the unspoken, McCloy had dramatically altered the terms of the debate.  Now it was no longer a question of invasion.  What had been a dormant but implicit option now became explicit.  The soon-to-be tested bomb would end the war, with or without warning.  And the war might end before the bomb was ready.10

Increasingly, however, the dominant point of view was that the idea of an invasion had been scrapped and that, in the absence of a Japanese surrender, the bombs would be dropped.

After another meeting with what was called the Committee of Three, most of the main players agreed "that a modest change in the surrender terms might soon end the war" and that "Japan [would be] susceptible to reason."11  Stimson put McCloy to work revising the terms of surrender, specifically the language of Paragraph 12, which referenced the terms the Japanese had found unacceptable.  McCloy did not mention the atomic bomb by name.  By now, however, Truman was gravitating toward Byrnes's position of using the bombs.

After meeting with the president on July 3, Stimson and McCloy "solicited a reluctant invitation" to attend the Potsdam Conference, but instead of traveling with the president's entourage aboard the USS Augusta, they secured their own travel arrangements to Germany.  The newly sworn-in Secretary of State, James Byrnes, would sail with the president and was part of his onboard poker group.12  The rest, as they say, is history.

At Potsdam, Truman was told by the Soviets that Japan was once again sending out feelers for a political resolution.  Truman told Stalin to stall them for time, while reasserting the demand for unconditional surrender in a speech in which he buried the existence of the bombs in language so vague that the Japanese leaders likely did not pick up on the implications.13  Japan backed away.  Truman's actions seem to suggest that, under Byrnes's influence (and perhaps independent of it), he had made up his mind to drop the bombs and wanted to sabotage any possibility of a political settlement.  As Bird notes, "Byrnes and Truman were isolated in their position; they were rejecting a plan to end the war that had been endorsed by virtually all of their advisors."14  Byrnes's position had been adopted by the president over McCloy's political option.  As Truman sailed for home on August 6, 1945, he received word that the uranium bomb nicknamed "Little Boy" had been dropped on Hiroshima, with the message "Big bomb dropped on Hiroshima August 5 at 7:15 P.M. Washington time.  First reports indicate complete success which was even more conspicuous than earlier test."  Truman characterized the attack as "the greatest thing in history."15  Three days later the plutonium bomb "Fat Man" fell on Nagasaki.  The Soviets had entered the fighting against Japan on August 8.  The war was over.

Given Byrnes's reputation as a political operative of rigid temperament and often questionable judgment, one can only wonder whether the dropping of the bombs was purely gratuitous.  Did he and the president believe that the American people wanted and deserved their pound of flesh almost four years after Pearl Harbor and some of the hardest combat ever fought by U.S. servicemen?16  Of course, there was also the inevitable question of "what would Roosevelt have done?"

With events safely fixed in the past, historians tend to dislike messy and problematic counterfactuals, yet one cannot help wondering whether McCloy's plan for a negotiated peace would have worked.  One of the most constructive uses of history is to inform present-day policy decisions through the examination of what has worked and what has not worked in the past, and why.  Even so, the vexing, even haunting, questions about the necessity of dropping the atomic bombs remain open.  The possibility of a political resolution to the war seems, at the very least, to have been plausible.  The Japanese probably would have surrendered by November, perhaps considerably earlier, as the result of negotiations, but there is no way to tell for certain.17  As it was, in August 1945, Truman decided to allow the Emperor to stay on anyway, and our generous reconstruction policies turned Japan (and Germany) into miracles of representative liberal democracy and enlightened capitalism.

Even if moderate elements in the Japanese government had been able to arrange an effective surrender, there is no telling whether the Japanese military, and especially the army, would have gone along with it.  As it was, even after two atomic bombs had leveled two entire cities, some members of the Japanese army still preferred self-destruction to capitulation, and a few even attempted a coup against the Emperor to preempt his surrender speech to the Japanese people.

This much is certain: our enemies in the most costly war in human history have now been close allies for seven decades (as the old joke goes, if the United States had lost WWII, we would now be driving Japanese and German cars).  Likewise, our Cold War enemies, the Russians, in spite of much Western tampering within their sphere of influence, now pose no real threat to us.  But the bomb remains.

Knowledge may be lost, but an idea cannot be un-invented; as soon as a human being put arrow to bow, the world was forever changed.  The bomb remains.  It remains in great numbers in at least nine nations and counting, in vastly more powerful forms (the hydrogen bomb) with vastly more sophisticated means of delivery.  It is impossible to say whether the development and use of the atomic bomb was and is categorically bad, but it remains for us a permanent Sword of Damocles and the nuclear “secret” is the knowledge of Prometheus.  It is now a fairly old technology, the same vintage as a ’46 Buick.

The bombings of Hiroshima and Nagasaki broke the ice on the use of these weapons in combat and will forever stand as a precedent for anyone else who may use them.  The United States is frequently judgmental of the actions and motives of other nations, and yet it remains the only nation to have used nuclear weapons in war.  Like so many people in 1945 and ever since, Stimson and Oppenheimer both recognized that the atomic bomb had changed everything.  More than any temporal regime, living or dead, it and its progeny remain a permanent enemy of mankind.

Notes

  1. For a discussion of the moral justification in regard to dropping the atomic bombs, see John Gray, Black Mass, New York: Farrar, Straus and Giroux, 2007, pp. 190-191.
  2. For an account of the fighting on Okinawa, see Eugene Sledge, With the Old Breed, New York: Random House, 1981.
  3. LeMay expresses this sentiment in an interview he gave for the 1973 documentary series, The World at War.
  4. See generally Chapter 12, "Hiroshima," in Kai Bird, The Chairman: John J. McCloy and the Making of the American Establishment, New York: Simon and Schuster, 1992, pp. 240-268.
  5. Bird, p. 242.
  6. Bird, p. 244.
  7. Bird, p. 245.
  8. Bird, p. 245.
  9. Bird, p. 246.
  10. Bird, p. 250.
  11. Bird, pp. 247-248.
  12. Bird, pp. 249-250; Averell Harriman and Elie Abel, Special Envoy to Churchill and Stalin, 1941-1946, New York: Random House, 1975, p. 493; Bird, p. 251.  It should be noted that most of the top American military commanders opposed dropping the atomic bombs on Japan. As Daniel Ellsberg observes: "The judgment that the bomb had not been necessary for victory—without invasion—was later expressed by Generals Eisenhower, MacArthur, and Arnold, as well as Admirals Leahy, King, Nimitz, and Halsey. (Eisenhower and Halsey also shared Leahy’s view that it was morally reprehensible.)  In other words, seven out of eight officers of five-star rank in the U.S. Armed Forces in 1945 believed that the bomb was not necessary to avert invasion (that is, all but General Marshall, Chief of Staff of the Army, who alone believed that an invasion might have been necessary)." [Emphasis added by Ellsberg.]  See Daniel Ellsberg, The Doomsday Machine, New York: Bloomsbury, 2017, pp. 262-263.  As it happened, Eisenhower was having dinner with Stimson when the Secretary of War received the cable saying that the Hiroshima bomb had been dropped and that it had been successful.  "Stimson asked the General his opinion and Eisenhower replied that he was against it on two counts.  First, the Japanese were ready to surrender and it wasn’t necessary to hit them with that awful thing.  Second, I hate to see our country be the first to use such a weapon.  Well… the old gentleman got furious.  I can see how he would.  After all, it had been his responsibility to push for all of the expenditures to develop the bomb, which of course he had the right to do, and was right to do."  See John Newhouse, War and Peace in the Nuclear Age, New York: Alfred A. Knopf, 1989, p. 47.  Newhouse also points out that there were numerous political and budgetary considerations related to the opinions of the various players involved in developing and dropping the bombs.  One can only hope that budgetary responsibility/culpability did not (or does not) drive events.
  13. Harriman, p. 293.
  14. For his own published account of this period, see James F. Byrnes, Speaking Frankly, New York: Harper & Brothers, 1947.
  15. See Robert Dallek, The Lost Peace, New York: HarperCollins, 2010, p. 128. Dallek makes this point, basing it on the Strategic Bombing Survey as well as the reports of Truman's own special envoy to Japan after the war in October 1945.