The Neoliberal Pandemic

By Michael F. Duggan

We cannot go on living like this.  The little crash of 2008 was a reminder that unregulated capitalism is its own worst enemy: sooner or later it must fall prey to its own excesses and turn again to the state for rescue.  But if we do no more than pick up the pieces and carry on as before, we can look forward to greater upheavals in the years to come.

            -Tony Judt, Ill Fares the Land           

How do you like economic globalization now?

The last time a worldview was so thoroughly exploded was probably when the Aristotelian model of the solar system met the telescope.  The neoliberal status quo is not responsible for the COVID-19 virus, but it is a primary enabler of the ineffective responses to it and of the severity of the economic crises that will follow.

With the contagion having fanned out along trade and transportation routes to become a pandemic—a global epidemic—the failure of much of the globalized West to respond to it quickly and effectively is virtually a metaphor for itself: free trade policies also make it easier for viruses to conduct their international business, and harder for nations to respond to them.  What we are witnessing is not only the fact that neoliberal globalization is undesirable in terms of long-term economics, but that it is literally bad for the health of the world.  Now that it is self-evident that economic globalization makes nations more vulnerable to rapidly-developing crises, the question is whether the powers that be in government, big business, and Wall Street will abandon this bankrupt worldview.

The current visitation is obviously not the first pandemic.  The Black Death also traveled along trade routes and made it to the Mediterranean from the Black Sea and then into Europe through Italian ports.1  Variola major—smallpox—came to the New World with the Europeans probably in 1519-1520 (Caribbean islands slightly earlier) at the beginning of the first phase of modern global economic interconnectedness.  With little or no immunity to the illness, as many as 20 million Native Americans died from it in Mexico alone.2  The H1N1 Spanish Flu emerged and spread during a global conflict and was particularly devastating onboard crowded U.S. troopships.  The H2N2 and H3N2 flu strains of 1957-58 and 1968 were the first pandemic pathogens of the jet age.  But the present scourge, which spread so quickly because of ease of travel, permeable borders, and slow initial responses, now threatens the world economy because of frozen global supply lines and a dearth of vital resources, much of whose production was off-shored over the previous quarter-century to low cost production zones.

The crisis is therefore a “perfect storm” made possible by the trade winds of neoliberal economic meteorology—a tempest not only of disease and economics, but also of economic recovery.  Consider the sequence of events:

  1. A rapidly-spreading disease arrives at a nation that is not prepared to cope with it. 
  2. In order to effectively respond to the disease, the nation needs massive amounts of medical equipment, much of which is produced in other nations with outbreaks of their own and is now in short supply.
  3. The disease spreads rapidly, and its highly contagious nature prevents a significant portion of the domestic workforce from working.
  4. If the situation persists, it could lead to a Great Depression-like economic crisis that destroys U.S. global economic predominance, undermines the dollar as the world reserve currency, and puts the national economy in a steep and permanent decline.   
  5. If people return to work prematurely, both the virus and the proximate cause of the economic crisis (i.e. people not working) could return and the overall crisis may persist.
  6. Because the U.S. has come to rely on foreign-produced durable goods over the past 30-50 years—to say nothing of important medical equipment and pharmaceuticals—it also relies on long overseas trade routes and remote foreign supply networks.  The pandemic has shut down many of these complex arteries and webs, and nobody knows how long it will take to start them up again.  If it takes too long, the United States could experience inflation, perhaps hyperinflation by the end of the year.  If this happens, the United States could experience an economic collapse that would make the Great Depression look like a hiccup.

Those of us who have followed the progress of an increasingly globalized world have long noticed the resulting imbalances and disparities both at home and abroad.  Even a quarter-century ago it was apparent to many of us that this was an unsustainable ideology, an equal and opposite utopian program to the artificial economies of Marxism-Leninism.  The United States took the wrong lessons from the end of the Cold War.  Rather than seeing the dangers and pitfalls of rigid ideology, its leaders simply embraced their own orthodoxy. They did not—and might still not—realize that the idea of a completely unregulated economy is just as extreme an outlook as one advocating a completely managed one, and just as problematic.

Then came the economic crisis of 2008, and, as the late Tony Judt observed, “[t]o avert national bankruptcies and wholesale banking collapse, governments and central bankers have performed remarkable policy reversals, liberally disbursing public money in pursuit of economic stability… A striking number of free market economists, worshippers at the feet of Milton Friedman and his Chicago colleagues, have lined up to don sackcloth and ashes and swear allegiance to the memory of John Maynard Keynes.”  Alas, he concludes, “the reversion to Keynesian economics [was] but a tactical retreat.”3 

But if the 2008 shot-across-the-bow and the corporate socialism that followed did not jar the acolytes of neoliberalism out of their fool’s paradise, what will?  Perhaps dark imaginings of themselves or a loved one dying alone in a chaotic, underequipped hospital ward—drowning in their own lungs over a period of a week or two—will open their eyes.  On a side note, the term “Great Recession” is a misnomer for a structural depression that still lingers throughout much of the country.

With a $2 trillion stimulus package in the mail and more on the way, everybody seems to have become a stopgap Keynesian or New Dealer again.  But will it last?  The danger is that the lockstep ideology of Wall Street and the Washington Consensus is so ingrained that the true believers of the establishment will not—cannot—change their ways.  Metaphors and analogies for the situation abound and it is difficult not to mix them.  Like a smoker who goes back to his cigarettes after being told that he has emphysema, will policymakers and advisors not learn from a global catastrophe?  With the egos of high-level careerists and credentialists so deeply invested in the post-Cold War mindset, can they admit error without a neurotic split, and will they risk a public loss of face even if they can admit it to themselves?  Like politburo toadies afraid to be the first to stop clapping after an address by Stalin, will any public figure break ranks from the economic religion of the past 30 years to admit error?  Rationalization and denial are the twin pillars of human psychology—dealing with cognitive dissonance is our lot as creatures—and it is yet to be seen whether or not the United States will succumb to a terminal case of The Emperor’s New Clothes.  

When we return to “normalcy” the question will be whether the United States will continue to be hogtied to an economy based on efficiency, multilateral trade agreements, comparative advantages, low cost production zones, corporate internationalism, and an overreliance on Big Finance, or will it adopt a more sensible and nuanced outlook based on national interests and situational dictates.  Economic realism is simple in its central tenet: countries trade freely when it is in their interest to do so and protect when it is in their interest to do so, and each situation must be judged case by case.  Free trade between nations with large GNPs often makes sense, but free trade between rich nations and poor nations results in quasi-imperialistic relationships in which one nation gets fat, lazy, and uncompetitive as it exploits the other for cheap labor, raw products, and low cost production venues with a convenient lack of labor and environmental laws.  As for multilateral trade agreements, suffice it to say that one size does not fit all.  Each trade relationship should be negotiated with an eye to the individualized needs and interests of the parties entering into it.  Above all, a nation should hire its own people at a living wage in order to produce the durable goods it uses and the vital supplies it may someday need.  The N95 respirator and ICU ventilators might serve as examples of the kind of medical equipment that the United States should consider producing entirely at home.

Karl Popper famously observed that “all life is problem solving” and that we learn by abandoning mistaken beliefs when they are demonstrated to be untrue.  To not learn and adapt because of an ideological adherence to a discredited position is not only irrational, but risks a kind of self-inflicted natural selection.  When a nation puts ideological considerations above realistic considerations of vital interests, it puts its long-term survival in jeopardy.  This is what happened in the decades leading up to the pandemic. 

If the country does respond effectively to the economic crisis that will follow the pandemic, could there be an upside to all of this?  Yes, but only if we are able to apply the lessons learned to other emerging crises.  Some of us who write about climate change and related issues speak of a “Pearl Harbor of the Environment” as a motivation to spur people and governments to an effective global response the way that the attack on Pearl Harbor threw the U.S. war mobilization into overdrive.  The pandemic is a comparatively minor environmental Pearl Harbor, and yet it is a world-historical event. As a friend of mine observed, how the United States and the West react to the economic aftermath of the pandemic will likely be a major turning point in history. Will we go back to policies that have worked, or those that enabled and exacerbated the present crisis?

The pandemic is what some people call a “Black Swan” or a high-impact, low-probability event.4  Those of us who have been predicting a pandemic since the early 1990s do not see it as a “low-probability” phenomenon, much less the last global epidemic.  Human beings are walking, traveling, socially-interacting Petri dishes, and there are now about 7.8 billion of us.  Perhaps climate skeptics will finally realize what nature can dish out and that humans are by far a junior partner on this planet, a small if overpopulated subset, a global plague in our own right.  The crises of the environment are not low-probability speculations either, but virtual certainties, and their impact will dwarf that of the coronavirus.  

Now that everybody appears to believe in science again, perhaps we can prepare for the much greater crises on the near horizon: the crisis of atmospheric carbon and climate change, the shocking loss of habitat and biodiversity, and the overarching problem of human overpopulation (to say nothing of the plastics crisis, soil depletion, and water issues).  Regardless of whether or not you subscribe to the Gaia hypothesis, as far as pushback from nature goes, we got lucky this time.  If the recovery is just about reopening without changing how we do things, the period of the lockdown will have been lost time, a lost opportunity. With no disrespect intended to the tens of thousands of people who have died and will die from this terrible disease and the millions who have been infected and/or put out of work by it, relative to what is coming, the COVID-19 pandemic might one day look like a bad cold.

Notes

  1. See J.M.W. Bean, “The Black Death: the Crisis and Its Social and Economic Consequences,” in The Black Death, the Impact of the Fourteenth-Century Plague (Binghamton, NY: Center for Medieval and Renaissance Studies, 1982), 25. 
  2. Jared Diamond, Guns, Germs, and Steel (New York: W. W. Norton & Company, 1997), 310.  Regarding the smallpox epidemic of 1775-1782, see Elizabeth A. Fenn, Pox Americana (New York: Farrar, Straus and Giroux, 2001). 
  3. Tony Judt, Ill Fares the Land (New York: Penguin, 2010), 7.
  4. See generally, Nassim Nicholas Taleb, The Black Swan (New York: Random House, 2007).  

A Purdy Commonwealth

Book Review (Unedited)

By Michael F. Duggan

Jedediah Purdy, This Land is Our Land: The Struggle for a New Commonwealth (Princeton, 2019), 164 pages.  $19.95

…when all the time life’s inseparable conditions allow only clumsy opportunities for amelioration by plodding compromises and contrivances.
-Thomas Hardy

It has become a tradition with us.  In December we go up to Rhinebeck for the Sinterklaas festival—an amalgam of Old Dutch, upscale small town America, funky Upstate elements, and any number of cultural traditions of the season.  And then, among the festivities, I duck into Oblong Books and Music to buy another book I don’t need for my burgeoning collection, or a CD (I have a similar summer tradition with The Island Bookstore near the Currituck Beach Lighthouse on the Outer Banks).  For 2018 the book was Andrew Bacevich’s Twilight of the American Century.  Before that was a collection of Charlie Christian recordings.  My choice for December 2019 was Jedediah Purdy’s slender volume, This Land is Our Land, which had the fortune, or misfortune, to be issued during the same season as David J. Silverman’s This Land is Their Land, whose title gives away the moral of a story that, when told accurately, lays waste to the cherished American myth of Thanksgiving and related historical fictions.

Over the course of Western political thought, we have had Plato’s Republic, More’s Utopia, Hobbes’s Royalist Leviathan and the do-it-yourself Renaissance principality of Machiavelli.  We have seen the gradualist conservatism of Hume, Burke, Hamilton, and Viereck, and Locke’s individualistic (and legalistic) libertarian Eden echoed by Jefferson and reflected in the Bill of Rights. In the next century, Marx proffers the workers’ utopia. The followers of Locke and Marx assumed that reason, cooperation, and benevolence can dominate as human traits (either while pushing Native Americans off of their land or executing class enemies and their families).  We have witnessed the great twentieth-century excrescences, the tribalistic non-identical twins of Marxism-Leninism and National Socialism.  With a vision drawing on Locke and a temperament reminiscent of the Loner of Walden, the present book offers the egalitarian commonwealth of Jedediah Purdy.

As one would expect, it is well written.  Purdy constantly surprises with just how well he expresses and unpacks his well-thought-out ideas.  The humbling feeling I get when reading him is like the one I got the first time I read Annie Dillard’s Pilgrim at Tinker Creek thirty-odd years ago, but then one of the most important lessons I have learned about the life of the mind is to not be intimidated or otherwise distracted by impressive form. It is the ideas that matter, and one must evaluate ideas by restating them in one’s own voice, no matter how homely.  Thus it is possible to love Purdy’s writing without always liking it.

Professor Purdy is one of those hard cases who sees the world for what it is and still chooses idealism without apology.  In this sense he resembles Gandhi and the early Marx.  Although he has described himself elsewhere as a “political pessimist,” he is not a happy pessimist who has made his peace with an imperfect world like Hardy, Holmes, or Hume.  Nor is he a partner at the Hobbesian firm of Nasty, Brutish, and Short.  Like Twain he sees the underlying motivations of frauds and shams, but has not been beaten down by the world’s dark truths. Rather, he sees the world as starkly as any realist and then, taking his bearings, acquires the unfulfilled ideals of the Enlightenment in his sights.  He is a disillusioned interpreter of the past but a temperamental optimist about human potential.  He puts forth ideas that are even more egalitarian than ones that have by his own account failed in the real world.

His mind is curious—distinctive—and his arguments are powerful in a moral rationalist way. He strikes one as decent, earnest, high-minded, and rational.  If he were younger one would (and some did) call him precocious, but with depth.  He is the real thing: a serious, often brilliant scholar.  He is confident in his abilities and in the correctness of his vision, and it is hard to say whether his optimism is hardwired or his moral rationalism dogmatic (and therefore beyond rational discussion).  Without psychologizing the author, I am curious to know what one would have to demonstrate in order to prove to him that a position of his is implausible. Taken to extremes, positions of moral rationalism are as unreasonable as any other perspective.

The book, described by its author and dust jacketeers as a meditation, a “Thoreauvian call,” and a history, works up to the concept of commonwealth after introducing it up front.  He develops the idea, or rather the need for it, through a Preface and five essays or loose-fitting chapters laying the groundwork, and terminating with a “Forward” (presumably a call to advance or else a prologue for either a future book spelling out the details of how to implement his plan, or else whatever is coming in the unfolding environmental catastrophe and the human response to it).  In order, these are “Homeland,” “This Land is Our Land,” “Reckonings,” “Losing a Country,” “The Long Environmental Justice Movement,” and “The Value of Life.”  The perspective, like the problems the book diagnoses, is closely tied to the land.

Purdy defines his idea of commonwealth in his Preface, “Homeland.”  A lawyer and a wordsmith, he begins with the etymological roots of the word (in a couple of places he introduces fundamental concepts by examining their Latin, French, or Middle English origins): early usages both common and elite, and the definition provided by the great double-edged sword of liberal political philosophy, John Locke.  He then generalizes, distills, and defines commonwealth by what it is and does: a social/economic arrangement allowing for “the well-being of the whole community—the flourishing that is shared and open to all” (p. xii), and by what he believes it could be: “an economy where no one gets their living by degrading someone else, nor by degrading the health of the land or the larger living world.  In such a community, the flourishing of everyone and everything would sustain the flourishing of each person.  This would be a way of living in deep reciprocity as well as deep equality” (p. xiii). Deep indeed.

Here the author speaks of a world that I do not know other than from the views of early communists, levelers, nineteenth-century anarchists, and utopians unified by a record of failure.  It is also a perspective akin to what one hears on the Millennial left and is reminiscent of the economic egalitarianism of Rutger Bregman.  At best these ideas may embody an idealistic part of a larger equation for successful government, like those of seventeenth-century thinkers such as James Harrington and English Oppositionists like Charles Davenant, John Trenchard, Thomas Gordon, James Burgh, and Henry St. John, First Viscount Bolingbroke (see Forrest McDonald, The Presidency of Jefferson, 19-21, 161-162, 171).  His writing rings somewhat of the liberal rationalism of John Rawls, whom he criticizes at the beginning of his Forward/last essay. Like a Holmesian legal positivist or an existentialist, he denies Rawls’s idea of a metaphysical theory of meaning for the world and those of us in it.  Thus Utopia is to be the fulfillment of rationalist will and effort and not a deterministic unfolding of historicist laws.

He writes “The freedom of that community [the commonwealth] would not be freedom from the consequence of your actions,” without elaboration. In the commonwealth, equality would seem to edge out freedom in all of its manifestations both good and bad.  But how are we to enforce such equality in order to secure good results? And what about those who exercise their freedom toward bad ends? He doesn’t say how organized crime, black markets, drug cartels, tribalism, psychopathy, other forms of criminal insanity and anti-social behavior—human venality in general—or simple (or cynical) nonconformity would fit into such a scheme of benevolent reciprocity, or how the commonwealth would respond to them. These things are not just inconvenient details to be swept up or shrugged off in light of an otherwise perfect blueprint. They are permanent features of the human moral landscape and extrapolations of significant features of our animal nature.  

I realize that in order to save the planet, humans will have to completely reorient our relationship with nature and in doing so, reconfigure our relationships with one another.  I also understand many of the steps that got us to where we are.  I cannot fault someone for not coming up with the right formula in a race where nobody has produced a workable one.  Perhaps the world cannot be saved within the confines of the existing economic, legal, and social institutions.  But as with any proposal, I am just as curious about the first practical steps of a general outline toward a solution as I am with the prettiest imaginings of what the final arrangement might look like.

As if to preempt the objections of skeptics like me, he provides a sprinkling of historical instances from the American tradition that foreshadow elements of his vision in the public utterances and deeds of Lincoln, Lyndon Johnson, and Martin Luther King, Jr.  Notably missing is Franklin Roosevelt’s Economic Bill of Rights from his 1944 State of the Union Address.  Purdy then rightfully observes that “The American commonwealth has been blocked again and again by division and exploitation.”

I think that Purdy and people like me ultimately want the same thing: a just and sustainable social order within a just and sustainable world order, or the closest workable thing to it.  The difference is in the way we see history, the nature of power, and human nature, and therefore the basis of possible solutions.  The concepts of chaos in physics and baseline manipulation in advocacy and adjudication tell us that slight variations in initial conditions, premises, and trajectories will take you to very different places.  Purdy and I thus arrive at different conclusions. 

Professor Purdy appears to interpret history in light of a goal: the potential of the highest ideals of the Enlightenment amounting to literal social and economic equality. These objectives may go beyond the actual ideas of the Enlightenment (e.g. literal economic equality in response to an interpretation of capitalism as economic authoritarianism and employment as exploitation).  The result is a vision of egalitarianism taken to an extreme.

By contrast (and riffing off of William James), I see history as a dark and bloody mess (underscore bloody, underscore mess) lit haphazardly by noble ideas and periods of relative enlightenment. I see capitalism as a thing that can be, and has been, regulated toward the public good (e.g. the New Deal paradigm of 1933 to the early 1970s). I see employment as a necessary thing that in some instances can be rewarding, even amounting to a calling or life’s purpose (e.g. the profession of a successful author or law professor), or else something unpleasant we do to barter our time for more meaningful or enjoyable things. I realize that this is not how employment currently works for most of the world’s people.  

Like many of his generation and younger Millennials, Purdy interprets the vague term “capitalism” too narrowly.  Rather than limiting one’s definition of “capitalism” to the collusion of powerful oligarchs, one can view it as a mechanism of growth, diversification, and creativity that relies on the dominance of small and medium-sized companies, a form I believe can be very good.  In this model, people and local and regional economies rely on the market for essential economic relationships that generate wealth and create prosperity. Thus construed, economies are naturalistic phenomena like organisms living in and interacting with their environment (see generally Jane Jacobs, Cities and the Wealth of Nations). Adam Smith himself believed that capitalism becomes poisoned once you allow collusion of the powerful to harm the public good. But I digress.

Purdy dreams of things that never were (and not for want of trying) and asks “why not?” where I look at the world and ask “what is reasonably possible based on past experience and a realistic understanding of what people are like and how power operates in the world?”  He seeks to perfect; I seek to accentuate the good given our meager ability to understand it among the cacophony of competing interests and to actualize it and maintain it, however imperfectly.  I embrace efforts intended to curb the worst abuses of our system.  I agree that we can and must do better, but we cannot—and should not—try to perfect human nature.  The perfection of humankind is a dangerous and seductive illusion at the end of a well-trodden path.  I suppose the reply to my criticisms would be that the world cannot survive anything short of an ideal commonwealth and that we have to get it right the first time.  I agree that whatever the solution, it must be the right program done with fewer errors than almost all human enterprises to date. 

As regards human nature—and in spite of the historical record that he understands better than most—Purdy apparently sees people as being essentially good and rational or at least capable of having these qualities predominate in their nature which we can then generalize into a system.  “Deep reciprocity” will not work otherwise. And yet how are we to bring forth and sustain these positive human characteristics?  Proffering a polity based on an assumption of the dominance of the better qualities of our nature is like trying to build a school of clinical psychology on the assumption that people are primarily happy. Both ignore significant aspects of our nature.

By contrast I see people as a mixed bag. We are intrinsically conflicted, off-balance creatures divided between primary considerations of self-orientation driven by pressures of individual selection, and less dominant motivations of altruism driven by group selection (see Edward O. Wilson, The Social Conquest of Earth, 142-47, 156-57, 162-65, 170-88, The Meaning of Human Existence, 22-24). Thus our inborn repertoire of behavior includes the general categories of aggression and competition between individuals and groups as well as cooperation.

Purdy is fully aware that, to date, governing in the real world has had more to do with power and the interests of the powerful than with abstract morality, and that the United States is just another nation whose history is a yin-yang, or rather a thatched weave, of good and bad.  Good acts do not expunge or balance out bad ones, and how is one to weigh and disentangle these things in a world of shifting gray tones between the Manichean extremes of dark and light?  To what degree does a bold experiment in republican self-rule wipe the slate clean of cultural murder, robbery, cheating, and slavery?  And which side is more truly reflective of what we are as an animal and under what circumstances?  Equally important is the fact that well-intended programs frequently breed disaster just as cynical programs may yield fruit, and we can never be sure whether our intentions will succeed. One thing is certain: in social and political life, things never turn out entirely as intended. The drag and resistance of the real world of human events and elements of our own internal nature will always thwart idealistic enterprises from working as planned. Utopian projects are nonstarters as intended and tend to morph quickly into monstrosities.

Purdy realizes that “the history of this continent’s past five centuries is woven from fantasy on the one hand and the relentless and often inhumane and destructive extraction of wealth on the other” (p. xv). Ironically both of these are based on, or were justified on, Lockean grounds that cut both ways—they emphasize individual rights while providing moral and legal cover for the killing and displacing of native peoples and the vulgar amassing of capital. 

He ends his Preface with the reassurance that his book “is not a morality tale” (perhaps in a similar way that people asserting that they are not insane are really not insane), that “It is a material story, an accounting of how this familiar tale was made that both illuminates and rebuts the morality tales that have attached to this place.  It is a story about the terms of land making that made American wealth so unequal, uncommon.”  Fair enough, although embedded in this statement is the idea that in order to be meaningful, a material (objective? literally true? accurately reflective of a greater external reality?) story must center around or have proximity to a greater moral point while rebutting false ones (likewise a morality tale must have a tangible relationship to and application in the real world).  It also implies the idea of degrees of truthfulness and falsity of moral narratives.  Is it even possible to tell a “material” story shorn of moral implications? How do you tell a neutral story and what would be the point of it? The mere telling of a story means that it is important to the author in some sense. Is this a Hemingway-like exercise in simple description of real trends and events that allows readers to fill in the feelings and morals themselves? In matters where values are at stake, “neutrality” is for cowards, cynics, and psychopaths, and Purdy is no coward or cynic and seems to be about as far from being a psychopath as a person can get.

There is no such thing as a morally neutral interpretation, and in history we walk a tightrope between judgment of the past and clemency toward it, avoiding unduly presentist chauvinism while trying to understand the past in its own terms and values as best we can.  The liberals of today stand on the shoulders of bold experimenters of the past who are likely to come up short in terms of subsequent developments in moral standards.  So, in spite of a checkered national-historical tradition, and in a time when cynicism, division and mistrust, disparity and exploitation are at levels not seen in almost a century (and in a time when radicals on the right have adopted postmodernist arguments on the relativity of truth), why does he think that our better angels will prevail and that a utopian commonwealth will work when all others have failed?

1. This Land is Our Land

In Chapter One, also the namesake of the book, Purdy attempts to uncover the source of enmity that divides the nation.  He observes that “the things that tie people together and the things that divide them tend to be the same thing,” like the two sides of a coin.  This includes the land itself, whether it is the forever ruined landscapes of the Appalachian coal belts, the agricultural country of eastern North Carolina, the various fracking regions, or the public lands of the West.

Here he is on to something, but I also think that he is emphasizing only one manifestation, one lineage of the rift in the nation that still baffles most of the chattering classes on the mainstream left.  He asks fundamental questions like “How do people come to be one another’s problems, threats, burdens?  How do we become one another’s helpers, protectors, friends?”  While Purdy is too smart to accept uncritical clichés claiming that it was “Russia and racism” that swung an election that should not have been close, he presents a landscape-based explanation rather than one of overarching causes—that the establishment wings of both parties abandoned huge swaths of the electorate and embraced trade multilateralism, comparative advantage, and the off-shoring of jobs. Both have embraced a neoliberal economy based on cheap labor both in distant nations and imported domestically.  Thus the Party of Labor abandoned a primary constituency in favor of identity politics.  In his new book, The Age of Illusions, Andrew Bacevich blames our current predicaments on the failure of post-Cold War policies generally and the entrenched ideologies of the “The End of History” model.

Purdy takes some shots at nationalism (presumably the scourge of extremist ethnic nationalism), characterizing it as a myth “that came into this world dripping blood and soil.”  It is unclear whether or not the moderate embrace of the nation-state falls under this category of myth, although later on he does write that the state would remain the basic unit of the world order (96-101).  One can only hope that Purdy is not throwing all forms of nationalism into one basket.  Prior to Bismarck, state nationalism was sometimes seen as a progressive alternative to monarchy, and Theodore Roosevelt’s “New Nationalism” of 1910-1912 was the first major platform of social democracy in U.S. history.  As Diana Johnstone observes in her new memoir, Circle in the Darkness, “Ho Chi Minh, Amilcar Cabral, Mahatma Gandhi, Simon Bolivar, and Patrick Henry, and a whole array of liberators” were nationalists.  She also points out that “within the framework of the nation state, representative democracy was born” (416).  Democracy, to include democratic socialism, is a phenomenon of the state, not a prairie fire of a borderless world.

Purdy notes that “It’s a truism that nativism and nationalism are crises today,” and that “nationalism is bound up in the American landscape.”  Here, too, I wouldn’t put such a fine point on it.  He is certainly right in a proximate sense, but tribalism and bigotry are parts of the human condition and they are always present, even when not conspicuous.  They are not “myths” to be easily rebuffed, but rather manifestations of fundamental human proclivities that must be actively opposed and taught against.  (Edward O. Wilson, The Meaning of Human Existence, 30-31). 

When a nation is doing well economically, these ugly facets remain below the surface; when a nation is prosperous, there are few grievances to pin on others.  The resulting decline in racist incidents during periods of relative prosperity may lull progressives into a false sense of triumph, of permanent social progress, that we are “defeating” what is in fact a dark base element of our nature.  Tribalism and racism reemerge when people are doing poorly and feel a need to bind with their own kind and irrationally look outward for scapegoats for their problems.  This is obviously made worse when people fall prey to propagandists and demagogues who tell them to embrace their ugliest impulses as moral principles.

Bigotry and tribalism are not reinvented whole cloth at particular points in history; something cannot come out of nothing.  They are not visitations—pandemics—that mysteriously arise out of the corrupted ether.  They are always there, dormant, sleeping serpents that reawaken in response to real historical (economic, political, social) conditions.  Sure, nativism and nationalism “may be bound up in American landscapes,” but in this respect whites or Americans in general are by no means unique.  Rather, these are symptoms of much broader problems, and land use is a single (but important) manifestation of them.  They are an indication of just how badly so many Americans have fared under the globalized economy.  If we only treat the symptoms and not the underlying disease, the pathology will persist and spread.

2. Reckonings

Here Purdy develops the idea of land as the basis of division and disparity.  He describes the transformation—destruction—of the environment for shortsighted gain, and how the people living on the ruined land are marginalized and eventually destroyed.  He offers the stark observation that “Power rearranges people on the land.  Those who cannot control the land are controlled by it” and “…economic powerlessness is tied to the incapacity to control your environment.”  As with feudal and agricultural Britain, power is closely tied to property.  But unlike the feudal period, much of today’s use and abuse of the land is primarily extractive and permanent in its transformation.  Unlike the English gentry of a later time or the analogous Junker class in Prussia, the powerful today are remote and no longer tied to the land or the people on it.

His descriptions and numbers shock the conscience: an estimated 500 mountains in West Virginia have been destroyed by “mountaintop removal” mining; adjacent valleys have been filled in to depths of 600 feet (about 45 feet deeper than the Washington Monument is tall); an estimated 2,000 miles of headwater creeks have been buried; “mining [has] altered 7 percent of the surface area [of] central Appalachian coalfields”; and “1.4 million acres of native forest [have] been destroyed and are unlikely to recover on the broken soils mining leaves.”  He also discusses the toxicity left behind in addition to the outright physical destruction.  Every American should be required to bear witness to these facts.

Here too my temptation would have been to couch the issue more broadly in terms of neoliberalism (which Purdy has written about elsewhere) and economic globalization, but then part of his thesis is how the land unites and divides us—and “land” and its possession are in the title, after all.  Thus he stays narrowly on topic and makes his case with the devastating eloquence of a prosecutor who is certain of his facts and the guilt of the accused.  He does touch on underlying causes, “a tableau of abdication: years of privatization and non-regulation.”  This is putting it mildly: privatization is part of an active agenda that includes a reregulation of the economy in favor of rich special interests (see Barlett and Steele, The Betrayal of the American Dream; Chrystia Freeland, Plutocrats; and Tony Judt, Ill Fares the Land).

For most of this essay he writes brilliantly about the abuse of the land and its people—“the disregarded and discarded classes”—by the powerful.  It is no coincidence that the regions he writes about, the ravaged former extractive or industrial areas of West Virginia (his home state) and Pennsylvania, the fracking regions of Oklahoma and Texas, and the farm country of eastern North Carolina, voted overwhelmingly for the current administration.  On these issues Purdy knows what he is talking about as well as anybody, and he is an expert on the relevant law.  Every American—left, right, and what passes for the center these days—should read this striking piece, and the price of the book is worth it for the observations made in this chapter alone.

3. Losing a Nation

In Chapter Three, Purdy describes the depression felt by many Americans after the 2016 election.  He gives an accounting of how most of his life has been a sequence of political disappointments.  He channels Henry David Thoreau in historical parallel to himself (one infers that Purdy is a better sport than the moralistic Loner of Concord, though one senses a temperamental affinity and an ideational lineage).

In spite of its eloquence, the beginning of this chapter feels like a backslide from the previous two—a little out of place in this collection altogether.  After having diagnosed, with power and insight, the disparities on the land that led to the result of the 2016 election, Purdy lapses into a stance more like that of a conventional depressed Democrat, baffled at how it all could have happened.

He reaches the epiphany that he has lost his country, writing that a “country lost in this fashion may never have been more than a pleasing illusion, a gauze of selective ignorance or indifference. ‘Losing a country’ may be a way of describing coming to see more clearly” (p. 60).  What he is describing is akin to the moment of peripeteia in Greek tragedy when (according to Philip Roth’s protagonist, Nathan Zuckerman, in The Human Stain) “the hero learns that everything he knows is wrong.”  With such a realization, one’s moral awareness is suddenly thrown into reverse, crisis, and disillusionment.  In this realization, whites are far behind others in the nation who long knew that they never had a country to lose, or who at least approached pleasant national-historical myths with a healthy measure of caution and skepticism.  Purdy/Thoreau’s historical illustration is the slave Anthony Burns, who, after fleeing the South, was returned to servitude under the Fugitive Slave Law.

I know of the country about which Purdy writes in this chapter—a nation founded on slavery, Indian wars, land grabs based on Lockean moral justificationism, and the continental expansionism of Manifest Destiny.  Later it would rest on, or allow, child labor, wage slavery, myths of rugged individualism and Frederick Jackson Turner’s Frontier Thesis, traditional overseas imperialism, and neoliberal imperialism—a nation that “began as both a world-historical land grab and a world-historical experiment in republican self-rule.”  I know this story and so does Professor Purdy, and he is weary of trying to come to terms with it.  Good first principles do not erase or balance out bad acts, not really.

But where I see—or at least until recently, saw—each new period of history as just another succeeding Manichaean chapter in the human story in which good and evil are inextricably intertwined, Purdy sees the gold to be separated from the dark ore.  The present day may thus be a hopeful demarcation between the bad old past and a truly enlightened future. The hope here is a modification of Emerson: the present is prologue.  Where I see goodness and reason that may be accentuated and darkness that at times can be minimized or temporarily kept in check, Purdy apparently hopes for bad to be eliminated to a substantial degree, and for good to be perfected as a kind of fulfillment of an uber Enlightenment ideal that has never existed on a large scale. 

Thinking, feeling Americans come to powerful disillusionment when they realize that much of the national mythological history we are taught, or absorb via cultural osmosis, is just that.  The truth is always more complex and a lot messier.  Thus Silverman’s corrective, This Land is Their Land.  Conservatives tend to cling to the myths and rationalize away or shrug off the dark truth as justifiable (or at least understandable) operating costs, the rounding errors of “freedom” and the foibles of an essentially good system.  As a friend of mine used to say, rationalization and denial are the twin pillars of human psychology—never underestimate another person’s capacity for self-delusion, and never underestimate your own.

A nation’s historical morality is not arithmetic and its history is not a balance sheet.  Progressives tend to focus on the first principles of the Founding and Framing and the more sensitive among them may turn bitterly against their flawed nation when they realize that our sins and attributes don’t balance out on a ledger of sums and deficits (from the point of view of most Native Americans, the British position on westward expansion during the 1760s was more enlightened than that of the American patriots).  The hope is that the national moral vector is straight and upward with a minimum of bad outliers. 

But history is not a simple graph indicating a median or mean of a rising moral trajectory, the “upward trend” of FDR’s fourth (and final) inaugural address.  One problem may be in trying to cast history primarily in moral terms, as if social progress were guaranteed like the growth of scientific knowledge and technological progress, and in believing that a nation is somehow exceptional—exempt from human nature—because of the values found in its founding documents; a nation is only as decent as its people and their chosen leaders, and social progress is never a given.  Progress must be fought for, and, once achieved, it must be defended.  Those on both the right and the left therefore fall prey to respective kinds of exceptionalism: denial in the first instance, and an excessively narrow focus in the second that sometimes leads to disillusionment and a crisis of faith.

The second of these appears to be what happened to Purdy in 2016.  If you assume that our system and its history are exceptional because our corpus includes the Declaration of Independence, the U.S. Constitution, Letters from an American Farmer, Democracy in America (by a sympathetic outsider), and On Civil Disobedience, you will spend the rest of your life trying to reconcile these with slavery, Jim Crow, the Indian wars, and the Gilded Age(s).

Conversely, if you assume that humans are aggressive creatures—a plague species—capable of total warfare, genocide, the strategic bombing of civilians, and the destruction of the world environment, but who are also capable of love, kindness, altruism, courage and self-sacrifice, classical music, hot and cool jazz, the works of Shakespeare, the Sistine Chapel, the New Deal, the Marshall Plan, the Peace Corps, and the Voting Rights Act, the world makes a lot more sense than one we construct in our minds based on binary categories of good and evil and an assumption of the eventual triumph of the former.  A realistic view of ourselves makes more sense, and one will arrive at the conclusion that, although we may and should take moral lessons from the past, no nation is an unswerving paragon of virtue.  And while we should never abandon efforts to make the world a better place, we cannot ignore what people are capable of doing, and will continue to do.  The question then becomes: at what point does the corruption of a system make it intolerable for us?

The world I know not only permits disparities, to some degree it requires them in a similar way that monochrome photographs require black and white and every shade of gray in between in order to produce an image. The issue is how to keep the dark side in check rather than defeating or eliminating an intrinsic part of what we are.  I rolled my eyes when politicians spoke of “defeating” evil in the world after the attacks of September 11th, and I do not know the world to which Professor Purdy aspires.

Like many people younger than me, Purdy writes about “privilege” and now projects it backward onto someone who likely never used the term in the sense that he means it: his nineteenth-century doppelganger, Thoreau.  Thus even Thoreau does not escape whipping, because he fails to keep the abolition of slavery in the forefront of his mind at all times: “Thoreau is complaining about, among other things, losing the privilege of ignoring slavery much of the time while also disapproving of it” (p. 60).  Here Purdy falls on his sword, admitting that he is more like Thoreau than Anthony Burns in terms of having a country to lose.  Purdy had a country to lose, and now he has lost it.

From here he goes into an elegant, if privileged, wallow in solitude and a Thoreau-like return to the moral instructiveness of a “naïve” response to nature—“a kind of second naïveté that one returns to after time away” (p. 63).  I think that all of us who are attracted to nature know exactly what he is talking about and choose occasional re-immersions in it.  Sometimes we long for or even attempt to return to it with the lost pre-Darwinian wonder of youth, or that high-on-life moral superiority one feels after reading Walden for the first time, a particularly inspiring essay by Emerson, or a Wordsworth poem.

In analyzing this part of the essay, two opposite thoughts entered my mind.  On the one hand is the idea expressed by Twain scholar Ron Powers on Clemens’s last return to Hannibal, his boyhood home: “When you become unsure of who you are now, you go to who you were when you knew who you were and try to read back out of that” (see Ken Burns’s documentary, Mark Twain).  On the other hand, you can’t repeat the past, Jay Gatsby—and you cannot unlearn what you know.  My sense is that Jed Purdy, even in despondency, knows full well who he is and does not really need to be reminded what his “nonnegotiables” are.  He asserts a third consideration: that this re-immersion in nature “would never be an escape from history and social life into a greenwood idyll.  It would be a way of getting another angle of vision on the same social facts, the same greedy and unequal humanity” (65).  In a depressed state, some people self-medicate with ice cream or chocolate.  Others choose scotch.  Purdy chooses reorientation via a return to nature.  Who can blame him?

In a sense, Purdy and Thoreau, indeed all of us so inclined, must choose a naïve response to nature in order to achieve a fresh vantage point on human events.  A more realistic return would be a distraction.  It would have revealed to Thoreau the Formica subintegra—slaver ants—that live in the woods around Walden, and whose “Austerlitz” (really a typical slave raid) he describes so vividly in his masterwork (see Edward O. Wilson, The Future of Life, xviii-xix).  A less naïve, more realistic immersion would remind Professor Purdy of nature’s overarching amorality, its unfairness to individuals and groups, its universal inequality and ubiquitous suffering, its injustices with no means of redress, its amoral chaos and frequent disasters.  As Annie Dillard writes, “Cock Robin may die most gruesome of slow deaths, and nature is no less pleased” (Pilgrim at Tinker Creek, 178).

He then goes into a discussion of fear and manipulation.  We are told by some of the powers that be to fear immigrants and Islamists, but not the climate change that threatens us all.  Here I think Purdy makes a slight misstep.  He is correct that the rich will initially be able to ride out the changes in the environment that are already affecting so many of the world’s poorest people, and affecting them disproportionately.  Where I think he is wrong is in the belief that the rich will be able to escape this for very long, that “The world of 2100 may well be no more dangerous for them than the world of middle-class Americans in 1950 or that of Gilded Age plutocrats in 1890” (73).  He is right that there is an immediate risk to them in “opening up economic life and global order to the challenges that would come from an honest confrontation with climate change.”  He calls this “willed complacency” and correctly believes that it will persist as long as enough voters identify with it.  I don’t know how to get to Purdy’s commonwealth from here, but I suspect that any realistic approach to addressing the environment will have to be the result of mortal fear jarring people out of complacency, a Pearl Harbor of the environment.

In the end, it was not the radicalism of Wendell Phillips or the brooding meditations of Henry David Thoreau that destroyed slavery.  Nor was it democratic reform, other than the election of a transformational moderate president with a plurality of the popular vote.  It was brute force and top-down administration.  It was the Armies of the Potomac, the Tennessee, and the James.  Wherever these armies went, slavery was dead forever.  Wherever Thoreau went, we got essays and privileged observations.

4.  The World We have Built

Here Professor Purdy introduces the ominous metric category of the “technosphere” (an important term only slightly less creepy than “biomass” as used by E.O. Wilson in The Future of Life, 29).  This he defines as the estimated thirty-trillion-ton sum total of human infrastructure on the planet, or the 4,000 tons each of us uses to get through our lives.  It is the aggregate of the extended human phenotype and is “five orders of magnitude greater than the weight of the human beings that it sustains” (82).  Without it, an individual human is like an oyster ripped from its shell—“unaccommodated man,” in the words of King Lear (perhaps, even more accurately, we are like cancer cells that cannot live independently and are killing the body in which we exist).  “Unaccommodated man” may also foreshadow the fate of the vast majority of the world’s 7.75 billion people in the coming Mad Max world.  An unaccommodated plague species.  Imbalance.

He is correct that “Our species infrastructure is the technosphere of roads, rails, utility lines, and housing.  It also has a broader sense, in which it encompasses all the artificial systems that allow people to survive together and to reach one another for communication and cooperation” (83)—although the degree to which an animal’s extended phenotype is ever truly “artificial” would seem to be a point of contention; but I digress.

Purdy then expands the idea of the technosphere to a second category that includes immaterial systems like “the world’s economies, and these in turn shape the global carbon cycle, the food system, mineral extraction… and so forth” (83).  The first two categories of the technosphere are the human world.  The third category is the altered physical systems of the world—the great life support system provided by the planet.  

Building on a central idea of his 2015 After Nature, he writes, “A world that is pervasively human-made present[s] a question: ‘What sort of world shall we make?’”  Given how little control we have over the world toward progressive ends, one can only wonder what control he thinks we will have in the future.  His goal is to make something “chosen and common” (88).  An internal reform he dismisses with the term “hack,” or “a way of pursuing system-level agency in the absence of political capacity to act at the scale of the system” (88).  Here he seems to be talking about a kind of technocratic osmosis into the “infrastructure of Leviathan’s circuitry” to “make it cleaner, faster, cheaper.”  This he rightly dismisses.

Then, as if resurrecting the pre-November 2016 Jed Purdy, he launches into a review of positive things government and high political ideas have done and can do (immediately before this he even gives a grudging nod to the New Deal while pointing out its warts).  He endorses the “uniquely constructive power of political sovereignty.”  After such a robust assertion of political realism, and the identification that “some of the world’s most powerful states form an Axis of Denial, in which refusing to seriously acknowledge or do anything about climate change is a point of convergence from the coal industry to the religious right” (93), he also bravely asserts a list of social issues that cynical political hacks frequently use as wedge issues to distract from the survival of the planet and ourselves.  He realizes that “the state, the weaponized tool of the worst things we do—against one another and the rest of life—is also the way to a different solution” (93).

At this point, the rollercoaster turns down again and Purdy concedes “choosing what sort of species we are going to be—often feels like more of a pious wish than a potent reality” (93-94).  He sums up:

“The appeal to humanity is at once cogent and nonsensical, urgent and pointless.  The heavy facts of a fragmented and unequal world contradict the scientists’ call at every point, but they don’t disestablish it.  Here is our paradox: the world cannot go on this way; and it can’t do otherwise.  It was the collective power of some—not all—human beings that got us into this: power over resources, power over the seasons, power over one another.  That power has created a global humanity entangled in a Frankenstein ecology. But it does not include the power of accountability or restraint, the power we need.  To face the Anthropocene, humans will need a way of facing one another.  We would need, first to be a we.”

There is a lot in this paragraph.  To address his points: an appeal to humanity on the basis of the right thing to do is pointless unless people are motivated into thinking that their survival and that of their children and grandchildren is directly threatened, and that they will die unless our behavior changes in fundamental ways.  Yes, we cannot continue to go on this way.  Yes, some people and nations have contributed more to climate change.  But human beings are a plague species, and we are all to blame (see John Gray, Straw Dogs).  The idea of the human species seeing itself as a single tribal category—also called for by E.O. Wilson and Adam Frank—is a nonstarter given the time in which we need to take action.

He concludes with a call for a new internationalism based on sovereignty.  In this he is correct: we will have to address the crises of the environment on a global scale from a critical mass of cooperating nations.  Beyond that, his solutions seem overly optimistic.

5. The Long Environmental Justice Movement

Purdy loses me in this meditation with what at times seems like the harsh judgment of good historical efforts.  The moralizing is not dominant, but is a spinoff, an epiphenomenon of the chapter.  In the first seven chapters of his 2015 book After Nature, Purdy shows himself to be an intellectual historian of the first order.  In this meditation he projects onto the past his twenty-first-century values as if they were known and accepted as part of a general normative morality of the earlier time, or else as absolutes that should have been known as we know them today.

He begins with a discussion of “one of the most popular and polarizing politicians in the country… Representative Alexandria Ocasio-Cortez of New York.”  He discusses the meaning of her (and Bernie Sanders’s) self-inflicted ideological tag of “democratic socialist.”  He states that this term is used to convey “that we have in common the things we choose to share together, and these things—good schools, good transport, public parks, and medical care for all—make a shared world.”

Here, Purdy (like all those who approve of this term) reveals a tin ear for practical politics.  For many Americans over 40, the word “socialism” is political poison.  When we look at the comparative happiness and health of the Scandinavian nations, we realize that socialism need not be a synonym for the USSR under Stalin, but many voters still interpret it that way.

Given that all of the things Purdy lists under “socialism” also fit comfortably under a designation of “social democracy”—e.g. the New Deal—why not use this less contentious term?  After all, another characteristic commonly ascribed to socialism is the takeover of the machinery of the economy by the government, and yet has either Ocasio-Cortez or Sanders advocated such a position?  The environmental program supported by Ocasio-Cortez even adopts the language of social democracy: the “Green New Deal.”  So why not call oneself a “social democrat”?  Stay a bit.

The answer may be that the original New Deal was an “industrial [and financial, and agricultural], racially exclusionary, male-centered program” (admittedly with some “potential” that could be reworked into a genuine commonwealth).  A few pages later he repeats that the domestic programs of Roosevelt were “patriarchal and racist.”

To this I would ask Professor Purdy: “patriarchal and racist” relative to what?  Of course Roosevelt should have done more for women and African-Americans, but what were the situational dictates and constraints under which he was operating?  Let us concede that all times before the present in this country were racist, and that even today we have a long way to go.  The question is: how does the New Deal measure up relative to the period in which it was implemented?

Did the Roosevelt administration not hire more women, Jews, and other minorities into high-level positions than any before it, and more than some that followed?  Were women like Frances Perkins and Eleanor Roosevelt not movers and shakers in the administration (to say nothing of Missy LeHand, the trusted gatekeeper and intimate of Roosevelt’s inner circle and member of his “Cufflinks Gang”)?  What about Roosevelt’s “Black Cabinet” and the fact that by 1935 “one-third of all African Americans were receiving some kind of Federal help” (quoted from Ken Burns and Geoffrey Ward, The Roosevelts)?  Was it not the Roosevelts who arranged for Marian Anderson to sing on the steps of the Lincoln Memorial after she was blocked from performing at Constitution Hall?  Was it not Franklin Roosevelt who issued Executive Order 8802, which created the Fair Employment Practice Committee (and if a cultural southerner like Harry Truman could desegregate the military in 1948, it is likely that Roosevelt would have done so at least as quickly had he survived his fourth term)?  Were all of these things not regarded as socially progressive, and even radical, for their time?

In order to appreciate what Franklin Roosevelt might have done in a better world or a different time, we need only look at the activities of his wife.  Eleanor Roosevelt was probably a quarter of a century ahead of most Democrats on issues of civil rights.  She, and the reaction generated by her writing and activities, were a barometer for what the president could and could not hope to get away with.  Why not support civil rights more vigorously than he did?  Because the Democratic Party was not only the party of labor, it was also the party of the deeply racist “Solid South.”  Should Roosevelt have initiated civil rights bills that would have had no chance of making it through both houses for another 25 years, at the expense of a war footing necessary to defeat Hitler and Imperial Japan?  As war loomed, should he have risked losing an election?

Are words like “racist” and “patriarchy” terms that F.D.R. or others at the time commonly used and understood in the sense we use them today, with their powerful moral implications?  Are these terms—again, in their modern usage—categories in which people in the mainstream could have consciously included or excluded themselves?  To paraphrase a commentator on a similar topic, calling FDR a “patriarch” is a little like saying that Jesus was a member of the Elks Club.  By today’s standards, Lincoln was a racist, and Purdy references him favorably.  It was this racist backwoods lawyer who, more than anybody, was responsible for destroying slavery—not Wendell Phillips, who kept slavery in the forefront of his mind.  Lyndon Johnson was a racist even by the standards of his day, but he also pushed the Civil Rights Act, the Voting Rights Act, the Fair Housing Act, and the programs of the Great Society through Congress.

My reading is that the New Deal was a leap forward in progressive policies, and for someone to impugn Roosevelt’s domestic programs in such casually strong language is not constructive.  Ironically, although the New Deal was far less socially progressive than the modern Democratic Party, it was economically more progressive than the party’s mainstream of the past 30 years.  Purdy concedes that the environmental and health and safety goals of legislators of the late New Deal paradigm and its allies in organized labor “would seem fantastical today” (90).  It is demoralizing to see the gains of the past characterized in disapproving terms.  Here, as elsewhere, the perfect is the enemy of the good, just as the utopian is the enemy of the possible, and radicalism is the enemy of real progress.

Purdy understands history, understands the conservative nature of much of the country, and yet still embraces the word “radical” and takes it out for a walk in its various forms throughout his book (he also has an affinity for the vague intensifier “deep”).  It is unclear how he hopes to sell his “radical” ideas to the mainstream he must win over in order to win elections and to enact effective legislation for the environment.

In this chapter also, the author discusses Aziz Rana and the important observation of “the two faces of American freedom,” that “From the beginning, the country was built on a more radical (that word again) respect for the equal freedom of its insiders—white male citizens—than any other in the world.  At the same time it was among the cruelest in its domination and exploitation of ‘outsiders,’ especially enslaved and indigenous people, women, and those who did not fit its gender and sexual norms.”

There is a lot in this passage, this dichotomy.  Without defending the obscene practices of slavery, the killing and displacing of native peoples, and the racism that remains a living feature of our nation, one could present a plausible interpretation of the history of the United States that is also characterized by the expansion of the franchise and of rights.  Did the Northern states not attempt to correct an imperfect constitution through a fratricidal war, all of whose causes go back to slavery or to political issues directly related to its spread, and whose conclusion brought about the death of slavery?  There is, of course, the unforgivable national abandonment of Reconstruction, the great lost opportunity, the lost revolution.  And there is Jim Crow.

But history is characterized by numerous countervailing currents.  Is the U.S. history of the late nineteenth and early twentieth centuries not in part the story of an ultimately successful organized labor movement (now largely undone by its enemies and abandoned by former allies) and a noble and partially successful civil rights movement?  Is our own time not characterized by an ongoing women’s movement and an inspiring Black Lives Matter movement attempting to rid our guilty nation of its continuing ingrained racism?  Does Professor Purdy agree with the claim that the legal status of most women at any time in our history is really “among the cruelest” relative to the treatment of women in traditional cultures in places like Southwest Asia, much of the Middle East, East Africa, and nineteenth-century China?

Purdy goes on to address Theodore Roosevelt, the man most singularly responsible for “America’s best idea,” the creation of more than 230 million acres of parkland (T.R. set aside more than 100 million acres himself: six national parks, 18 national monuments, 51 bird reservations, and 150 national forests), and his unsavory friend, Madison Grant.  Apparently “nature was worth saving for its aristocratic qualities; where these were lacking, the conservationists were indifferent” (114).

Although there is some truth to this observation in regard to the preservation of spectacular landscapes like Crater Lake, Devil’s Tower, the Grand Canyon, and the Sequoia groves, what about Pelican Island?  What “aristocratic” qualities do white pelicans exhibit, and why did T.R. equate the killing of them with murder? (Douglas Brinkley, The Wilderness Warrior, 211).  What was so particularly noble about the terns of Tern Island that led him to declare it a preserve?  Although the snowy egret is a beautiful bird, “aristocratic” is not the first adjective that leaps into my mind when I see one.  And yet T.R. made a number of Florida islands into sanctuaries to save them and other birds from the fate of the Carolina parakeet.  The egret population had been decimated by the demand for plumes for women’s hats.

My reading of T.R.’s conservation efforts is that he saw the wilderness as a thing of great intrinsic importance to be preserved.  In a more personal sense, he saw it as a place in which to test oneself, and he wished to set aside large portions of wild areas for ordinary Americans to have the kind of experiences that he believed shaped his own character (T.R. may not have had the common touch, but his life among the cowboys in the mid-1880s and the eclectic makeup of the Rough Riders suggest an affinity for ordinary people).  T.R.’s ethos of preservation probably has as much to do with his own intellectual interests as an amateur naturalist and the nineteenth-century notion of “manliness” as it does with class considerations.

Thus various strata and lineages leading to a modern liberal perspective are tainted as excrescences of a racist, elitist, patriarchal past: the magnificent conservation efforts of T.R. are smeared as “the last redoubt of nobility in a leveling and hybridizing democracy.  They went to the woods to escape humanity.”  Why then make a gift of 230 million acres of preserved nature to that mass of humanity?  Even John Muir is written off as a “romantic naturalist,” a misanthropic purist, and a bigot.

The author also dismisses Paul Ehrlich’s very real concerns about overpopulation—the bedrock crisis of the environment upon which all the others depend—as “misanthropic,” and one is left wondering whether the mere acknowledgment of the fact that humans have become a plague species makes one a misanthrope in Purdy’s estimation.  As for the eugenics of Madison Grant and Gifford Pinchot, it is important to note that with the pendulum deep on the “nature” side of the nature-versus-nurture debate during the early 1900s, these ideas were frequently supported by progressives of the time, including Margaret Sanger and Louis Brandeis, who signed on to the 1927 Holmes opinion in Buck v. Bell.  On a side note, the times in which Jed Purdy and I live are far more involved in genetic tampering than the early twentieth century ever was.  We just don’t call it “eugenics” anymore.

That I have digressed so far as to defend a mixed nature like Theodore Roosevelt illustrates the problem of strong presentism and acontextual history: it risks alienating non-radical progressives who might otherwise be sympathetic.  Chris Hedges makes a similar mistake in America: The Farewell Tour (301).  I am defending T.R.’s conservationist impulses and accomplishments, and yet most of what the author has been trying to get across in this meditation is tainted or lost on me altogether.  It also seems to be bad form to insult those upon whose shoulders one stands, including the Roosevelts—the most successful progressives and conservationists in U.S. history.  Perhaps this point of view, like Roy Scranton’s belief that the Second World War in the South Pacific was genocidal, represents the vanguard of a new wave of historiography denouncing everything that has come before as criminal—the apotheosis of the blame game of superior people.  By the standards of a future time, we may be found wanting as the people who ruined the entire planet.

Purdy’s moral frankness therefore risks undermining his own program in terms of practical policy.  In the meditation “This Land is Our Land” he perceptively diagnoses the plight of the powerless on the land with real sympathy and empathy.  He must realize that many of the same people harbor resentments about the attention that modern social (as opposed to economic) progressives lavish on identity politics at their expense (see, for instance, Jean Bricmont’s “Trump and the Liberal Intelligentsia: a View from Europe,” CounterPunch, March 30, 2016).  There appears to be a disconnect in Purdy’s understanding between his sympathy for the dispossessed and its relationship to the radical nationalism he rightfully despises.  If we are going to solve the crises of the environment and the economy, we must make the problems and the solutions cognizable and congenial to a great majority of people—make the people a part of the solution, rather than objects of derision.  We must bring people together.

The point, as so many others have noted before, is that when people fling epithets like racist and racism at ordinary people and well-intended programs of the past, they have not only ended any possibility of further discussion but have also written off these people as irredeemable.  This obviously does not include people who embrace racism, who should be denounced in the strongest possible terms.  But do we really want to write off the impressive historical first step in American economic progressivism, its bold experimentation, and earlier efforts at environmental protection?  In order to address the environment, we will have to close ranks and entirely reconfigure the human relationship with the planet.  And you can’t win people over with name-calling.

Purdy may well be a foreshadow, a bellwether, of an emerging outlook that will come to the fore as the Millennials rise into positions of academic and political leadership.  Some of the young people I have spoken with in my classes and elsewhere embrace a powerful utopianism that is heavy on speculative social philosophy and moral judgment and harsh in its historical understanding.  Some are true believers who see things in terms of either/or moral absolutism.  The implication here, to paraphrase George W. Bush, is that you are either with them or against them.  People who disagree with them are to be defeated rather than won over.  If the climate crises turn out to match the worst predictions, we can only assume that these true believers will advocate violence against obstructionists.  From my vantage point this seems more likely than an ideal commonwealth materializing somehow.

As regards the New Deal, I realize that I have focused unduly in this section on a few lines in which the author likely intended no offense to mainstream progressives, lines suggesting that something like an ideal Green New Deal might be built out of the potential of the original New Deal with all of its warts.  These I have probably blown up beyond all proportion.

Chapter 5 Forward

“The Value of Life” is a magnificent lecture, a sermon of reason and humanity.  Here he shines—soars—although, as with reading science fiction, you have to take it on its own terms and suspend disbelief, until he presents his solutions, that is.  I am still curious about what this last chapter is a “Forward” to.  Hopefully it is for a future magnum opus in which he spells out the realistic measures of how to implement his commonwealth point by point.

He begins by discussing the false assumptions of capitalist economics and theories of value and price.  Free market capitalism tells us that price can be known and quantified and that it is the result of freedom—the exerting of priorities as choices in a marketplace.  “By contrast, a theory of value would be totalitarian… Freedom and equality cannot tolerate a public theory of value, but price is their favorite child.” (142-143)  It’s a fancy way of describing supply and demand.  Purdy then demolishes this outlook with a power that I have seldom seen before, and it was shocking in the way that the questioning of fundamental assumptions is always shocking when done well.

To the contrary, Purdy tells us, the economy “does embrace a theory of value and is driving our slow but accelerating disaster for both human and nonhuman life” (the delusional detachment of economists and economics from the natural world is also a point that Edward O. Wilson discusses in The Future of Life, which he coincidentally begins with a letter to Henry David Thoreau, 22-41).  The next three pages outline how the economy imposes value on our lives, on what we produce, on resources, and on the living world, and in doing so reveals itself to be a de facto “totalitarian system of value” whose systems “betray the ideals they supposedly uphold, of the equality and freedom of human beings.  They impose a flawed and destructive theory of value.” (145-146)  Everybody—especially economists and businessmen and women—should read this.  He concludes these moving passages with the encouraging “We have made a world that overmasters us.  Some of us have learned to call it freedom, and others call it a sin.  But the reality is both worse and better than that.  It is the sum of human choices and powers, and those—and only those—can remake it.”  Bravo.

Although his observations equating the values of capitalist economics with the oppression of a totalitarian system are in my opinion overstated, at this point I was engaged and couldn’t wait to read about his solution.  It is the commonwealth.  This is where he loses me in a final sense with “a way of living in which our survival and flourishing do not prey constantly and involuntarily on the lives of others…”  This would involve a “deep reworking of two intertwined infrastructures, the economy and the material technosphere.”  In the next sentence he tells us what it would take to do this.  “If we change those, we will have to change human nature and begin a kind of peace with one another and the rest of life.”  Oh, is that all?

That we will have to change the human relationship with nature and ourselves is undeniable.  But for me the “changing of human nature” position is a categorical deal breaker, a kind of moral perpetual motion machine (and human nature is the moral second law of thermodynamics): it might be possible, but it is yet to be proven, and none has ever been built.  As the late Tony Judt observed: “If we have learned anything from the 20th century, we should have grasped that the more perfect the answer, the more terrifying its consequences.” (Ill Fares the Land)

My reading of history is that the most successful political programs are those designed to accommodate human nature while preserving rights and equality (as the Roosevelts tried to do) and that the most notable failures have been attempts to perfect human nature.  Given that the scientific means to alter human nature may now exist (read: genetics), the danger of well-meaning utopianism is greater than it has ever been.  For a true believer, “utopian” means a plan for a perfect world; for a realist historian, it is an adjective to describe the ideas behind the Soviet Union and Nazi Germany.

The other problem with Purdy’s outlook is its base of moral rationalism.  All programs thus founded—whether it be Marxism or the Chicago School or Law and Economics—ignore the fundamental reality that people are not primarily rational or predominantly good (and morality is located more in the realm of the passions than in reason).  The cut-and-dried tenets of moral-rational programs do violence to a nuanced understanding of the reality of human moral and social complexity.

Yes, predatory capitalism creates disparities, winners, and losers, in a similar way that a biosphere, in addition to symbiosis and the altruism driven by pressures of group selection, requires the death and suffering of individuals.  Yes, the greatest American presidents have attempted to abolish or minimize inequality, disparity, abuses of power, and poverty.  Theodore and Franklin Roosevelt attempted to bring the worst abuses of capitalism to heel and to regulate the economy to better serve the public’s interest.  But to equate political equality/inequality with de facto economic parity/disparity is a bridge too far for any realistic political calculus.  It is also all too easy to write about this and to make this equation into a literary exercise of eloquent utopianism, like Thoreau and Emerson, who were content to write and talk about slavery.

Practical questions and objections to Purdy’s commonwealth abound.  Would the egalitarianism and altruism necessary for this plan be voluntary—could one opt out?  Would such dissent and nonconformity be tolerated?  If not, then what does that say about freedom in the commonwealth?  Would not the commonwealth be just as authoritarian as capitalism?  Where the law is not enforced, the law ceases to exist, and one is left wondering who would enforce the egalitarianism of the commonwealth and how (see generally the Soviet Union).  If human equality—equality of intelligence, common sense, ambition and drive, artistic and musical talent, morality (height?)—is not literally true, how would it be enforced?  Stalin had some ideas on this subject.  Would the economy of a “world-renewing ecological commonwealth” in which nurses are “prized comparably to surgeons” and which rewarded “elementary school teachers comparably to professors at research universities” be a managed one, or would it sprout naturally from human benevolence (see, again, the Soviet Union)?  What does the history of completely managed economies look like?

People hate to be required to toe the line, and there is great differentiation in ambition and talent among the individuals of any group.  Some people are content with what they have.  Others wish to reach for unknown spheres.  Would they be allowed to do this?  Would they be rewarded proportionally for rarer abilities, a greater contribution, or the relative difficulty of their efforts?  Would they be forcibly restrained if they acted on their ambition?  Again, what of dissent?

Small, eccentric communities of likeminded individuals can collect to form societies like the one Purdy proffers.  The Shakers and other religious utopian groups might serve as examples.  But how would you sustain this at a national level in a diverse and populous nation?  Where are the Shakers today?  Moreover, such conformity tends to undermine progress.  Would the Amish have ever produced penicillin, the works of Shakespeare, special or general relativity or quantum mechanics, or advanced theories on the environment?  How would the Amish have stood up to Hitler?  How do you scale up a homogeneous community like this?  These groups may be dissenters from a national norm, but within their groups they are conformist in the extreme.  Would nonconformity be permitted within the commonwealth?  What if the non-conformists aggressively oppose and resist the commonwealth?

These are the easy questions, and until one devises an equally elegant program for implementing the commonwealth and then scaling it up to the nations of a world heading for 8 billion people, it is essentially a discussion about either unicorns or, more likely, monsters.  It makes no sense to have millions of people conform to a program based on nonexistent standards and a stipulation of perfecting human nature.  Anybody who advocates generalizing such a monolithic view into a working system that would include all people is not in touch with political reality.

Purdy’s model is heavy on the altruism, but without specifics about how to implement it and with only the vaguest of economic details (what exactly would “an economy where no one gets their living by degrading someone else, nor by degrading the health of the land or the larger living world” entail as a practical matter?).  How can he be sure that his plan would lead to “the flourishing of everyone and everything would sustain the flourishing of each person” any more than Marx could have imagined the worst abuses of Stalin?  What exactly is “living in deep reciprocity as well as deep equality,” and even with a vague assertion of some kind of depth, how would a basis for such a life be put in place?  The world is a complex interplay of good and evil that is impossible to completely sort out.  Very bad things come from well-intended programs (making utopian efforts a nonstarter), and sometimes good things come out of evil efforts.  The only thing we can be sure of is that the commonwealth, if implemented, would not work as intended.

We should always be skeptical whenever a brilliant theorist, an ideologue with his or her own plan, presents a clear, pleasing vision about how a society, the economy, and government should be ordered.  Government may be based on a general outline, but it is always imperfect—an overlay allowing for a naturalistic process of trial and error and adaptation.  The U.S. Constitution was a very impressive document by eighteenth-century standards, but it took a war that resulted in the deaths of perhaps a million people to work out its flaws.  There is no utopia; there is only piecemeal social engineering (see Karl Popper, The Open Society and its Enemies).  There are no clear and applicable universal standards, and there are few things more dangerous than a true believer with clarity of vision.  Even with all of its wonderful propositions, I believe that if Purdy’s program were put in place, it would quickly become something unintended, something oppressive.  As Hemingway wrote, “all things truly wicked start from innocence.”  Henry Adams puts the blame on the author of the ideas and actions: “It is always the good men who do the most harm in the world.” (The Civil War, Geoffrey C. Ward, 284)  Purdy is like a tragic hero, and his excessive optimism is his flaw.  As a superhero for a kind of progressive academic sensibility, his idealism is his kryptonite, and he carries it with him.

In spite of his powerful understanding of the relationships of the weak and powerful to the land and his broad and deep understanding of history, the law, and legislation, what Purdy does not seem to realize is the relationship of human nature to power.  In human society, power aggregates, and elites tend to separate themselves from the weak and favor their own and their own interests.  New elites tend to act just as poorly as the previous landlords.  The elites in all communist nations (e.g., the Soviet Nomenklatura) favored their own over the masses.  A simplistic argument can be made that, even when its applications are generous and altruistic, the whole purpose of power, for those who possess it, is to help their own.  Rarified visions expressed with grace and refinement will not uproot the realities that underlie politics as an expression of power.  Thus Purdy is reminiscent of Noam Chomsky: a brilliant diagnostician without a realistic prescription.

Can the weak be protected?  Yes—the fact that the law is primarily a power enterprise and a tool for the elite to serve their own interests does not mean that it cannot be generous and high-minded, and, if a nation’s elite is wise, it will be generous.  This was also known by early legal positivists like Brooks Adams (James Herget, American Jurisprudence, 131-134).  The law is an external set of rules; morality is the set of internal impulses that rise up in us in response to events.  Again, where the law is not enforced, the law ceases to exist (the law requires a written or understood rule, general compliance, and enforcement; if any of these elements is missing, the law is not extant as a practical matter—and the purpose of the law is fundamentally practical).  The question is how the poor, the disenfranchised, the unrepresented minority can use the law to their favor.

The weak in society can be protected by laws and by the enforcement of those laws.  Excluded minorities attain rights by increasing their political power and by the increase of recognition and sympathy for them in society at large.  They can demand recognition, and they can appeal to the morality and sense of decency of the majority, and in a representative system this sometimes works (as with the Civil Rights Movement of the 1950s and ’60s).  Shifts in demographics manifested as changes in ratios of power, hastened by appeals to morality, may succeed.  Moral arguments and appeals to basic fairness can be made to increase sympathy, recognition, and power, but ultimately it is the attaining and wielding of power that brings political equality.  French Algerians have made some progress in attaining rights because they now exert some degree of influence; French gypsies have not.

The twin moral concepts of our system, equality and freedom, have a kind of Cain and Abel relationship: the more freedom, the less equality; the more equality is enforced, the less freedom.  Like the conceptual non-identical twins of price and value, these are siblings that don’t play well together.  They are values fundamentally at odds with each other, and their rivalry makes governance in this nation a constant balancing act—or rather, it keeps the Republic in a precarious state of imbalance, like an inverted pyramid.  The rift between equality and freedom makes balancing a necessary condition lest the Republic collapse (similar to the struggle between freedom and security, but that is another story).  Governing in a free society is a constant process of balancing and fine-tuning.

Conclusion

Does Purdy offer a workable plan?  What exactly is the “commonwealth” as a proposition?  Is it a practical model?  A stopgap or half measure devised for a time beyond the looming environmental catastrophe?  Or is the commonwealth just more of the same: vague idealism in the face of an unfolding catastrophe?  Is it a preface to a post-apocalyptic eschatology—a handy blueprint for the calms after the storm?  The world after the Flood?  The bare-bones outline handed down from a great prophet from before the collapse?  Is it wishful thinking—the hypothesis of an untestable thought experiment—or “The World According to Jed Purdy, if He Were God”?  Of what good is this?  Until we have details, it is all just a discussion of unicorns.  Until we talk about first steps and how ordinary people will generate wealth and a means to live, I suspect we are talking about building castles in an increasingly carbon-suffused sky.

Ultimately, Purdy is a chauvinist for the species: a tribalist of a species-wide chosen class—a “we”—that includes all human beings, a universalism within the limits of the human genome.  As with Adam Frank and E. O. Wilson, Purdy believes that all people must come to accept all other people as their brothers and sisters if the world is to be saved.  If this is the only chance we have, then the game is up given the timeframe with which we are dealing.  It is a tenet of mine that when a prescription is necessary but rendered impossible by existing political realities, then it is the political system and not the remedy that is unrealistic.  The caveat is that the prescription would have had to be workable if its implementation were politically feasible.  Even if the nations of the world were positively predisposed to attempt such an audacious undertaking—and the powerful were inclined to voluntarily relinquish their wealth and power or let ordinary people vote it away—there is no reason to assume that the great majority of people would be inclined to such a world order. The unsettling political reality of our time is that most of the populist revolutionary fervor appears to be on the far right.

If the survival of human beings and the planet depends upon a universal mutual sympathy of all humankind suddenly coming to the fore as our dominant characteristic, then I fear we are finished.  If people can be brought together in a shotgun marriage of cooperation by a “Pearl Harbor of the environment,” then perhaps there is a chance.  There is something morally suspect about the idea of imposing an involuntary system on people regardless of its motives.  We must be careful not to impose on the world a hypothetical model of commonwealth based on ultra-Enlightenment principles that might not be a part of local traditions.  Jane Jacobs’s idea of a world order based on sustainable, naturalistic production regions, themselves based on local customs, traditions, and natural and human resources, seems far more practicable (again, Jane Jacobs, Cities and the Wealth of Nations).  As with Purdy’s model, the question is how to get there from here.

And so I recommend Professor Purdy’s book without agreeing with its prescriptions and in spite of my doubt about the plausibility of his commonwealth as a model.  History is discussion and policy should be based on historical understanding.  In my opinion Purdy has a penetrating understanding of history, but fails to apply its lessons realistically as a basis for his model.  Regardless, I hope that many others will read his book and that its ideas will inspire spirited discussion.

Yes, we have seen Plato’s Republic, More’s Utopia, and Marx’s workers’ paradise, and all have been nonstarters or failures.  Now we have Purdy’s commonwealth.  We shall see.  (Apologies to Georges Clemenceau.)

I hope that this review has not come across as unduly harsh or a piling-on in opposition to a noble effort with the highest of purposes: to sketch out a basis for a better world.  I have not tried to be mean, and I recommend reading this book.  From what I infer from his writings and speeches, Professor Purdy is an impressive person, a high-minded rational man in an increasingly mean and irrational time.  There are few contemporary commentators whose writing and abilities I admire more.  If I didn’t have respect for him, I would not have taken the time to write this.

1917

Reviewed by Michael F. Duggan

Spoiler Alert: You will likely be able to piece together the plot of this movie from this review.

Yesterday I saw 1917, the blockbuster by Sam Mendes (American Beauty, Road to Perdition, Revolutionary Road) that has walked away with a trenchful of awards, a trend likely to continue through the Oscars.  For anybody with an interest in the Great War, it is a must see.

Above all it is a brilliant technical achievement: a movie that seems to be a single shot done in one take (although knowing this ahead of time proved to be distracting, as I watched closely for breaks in the shot).  Because of this, there is a seamless quality to the film that allows one to easily replay the general outline in the mind.

The film is conceptually related to Peter Weir’s 1981 Gallipoli (the race-against-time, WWI buddy film) and Saving Private Ryan (the search for an imperiled sibling).  As with Saving Private Ryan (another DreamWorks production), the film embraces a tangible realism that strips away the sepia tone and the 103 years of First World War historical accretion since the purported events of the story.  In terms of appearance, it is the most realistic portrayal of the First World War I have seen on the big screen.  While it is as gritty as Private Ryan, it never attempts to achieve the intensity of the famous landing scene at Omaha Beach.

The film appropriates a number of devices and themes from other books and movies.  Without giving away the story, it borrows a plot twist from Mailer’s The Naked and the Dead, and there is an escape scene right out of A Farewell to Arms.  The latter scene—and a plane crash more than reminiscent of the one in Anthony Minghella’s The English Patient—leaves one wondering how they shot these segments.

The camerawork is especially interesting: a soldier’s-eye-view that makes the range of vision both narrow at points—even claustrophobic—and wide-angle at others.  A scene late in the film where one of the protagonists runs parallel to a trench line as an attack begins is impressive.

There is a single scene that made no sense to me, in which two British soldiers on a high-priority, time-urgent mission whose success will save an entire regiment feel compelled to stop to make sure that a deserted French farm is really deserted.  There is also a nighttime chase scene in a ruined French town that is right out of a nightmare.

My only other criticism is that at times the front seems too quiet and depopulated, but then this was the period in 1917 when the Germans had withdrawn from their front-line trenches to occupy the Hindenburg Line (Siegfriedstellung).  As with Apocalypse Now, the film’s greatest star power—Colin Firth and Benedict Cumberbatch—is limited to brief scenes that bookend the action.  In a way this is a strength, and the film centers on two English Tommies, Lance Corporals Tom Blake and Will Schofield, played pitch-perfectly by Dean-Charles Chapman (Game of Thrones) and George MacKay.  These two actors look the part of the quintessential British everymen who fought and died—the “lions led by donkeys,” the latter of whom are appropriately played by big names.  The rest of the trappings and material culture of the First World War are casual, accurate, and ubiquitous (even down to the mean and rough-hewn depictions of British trenches relative to the cleaner and perhaps overengineered German trenches).

Overall it is a terrific film, perhaps a great movie, and, with all of the impressive technical aspects, the question is the degree to which the story itself draws you in (art is even more about feeling than it is about technical mastery).  Although, like life itself, it may be a film worth seeing only once; unlike most other movies I like, I am not sure I will watch this one over and over again.  Time will tell.

On Books, Personal Libraries, and the Evaluation of Ideas

By Michael F. Duggan

You can tell a lot about a person by their library; my library consists of several thousand books in no particular order.

A few days ago I had a discussion with some friends on the importance of books and personal libraries and why people collect books. I have several thousand books of my own because I am a generalist by nature and there are a number of areas on which I write. If I see an important work that I might need to consult, I will buy it. This obviously has a problematic side; books take up space, and buying them is probably the closest thing I have to an actual addiction (albeit a comparatively minor one, and less constantly distracting than the “new heroin”/“new nicotine” that are the smart technologies of our time).

But the buying and not buying of books has even deeper social, cultural, historical, and political implications. I would argue that young people today are on balance among the most intelligent who have ever lived. But I have also sensed that, because accessing raw, non-contextual information is so absurdly easy these days thanks to the Internet and the portable toys we use to interact with it, some of those among the most recent generation or two have comparatively little understanding of the historical lineages and connections–the historical-conceptual context–between ideas and the people who devised or discovered them.

When research involved actually going to libraries and archives, comparatively few people did it. Now that it is ridiculously easy, it is also trendy, but many of the “kids these days” have little understanding of how to evaluate the disembodied ideas and information they access. The Internet is a Wild West in which true ideas are frequently intermixed with untrue, partially true, narrowly or technically true-but-misleading, distorted or propagandistic, and incomplete information. As with the Western printing revolution of 500 years ago, the information revolution brings along with its many benefits the possibility of unprecedented social instability and conflict. This blog is a latter-day incarnation of a small seventeenth- or eighteenth-century press; its posts are pamphlets and broadsides.

As recently as a couple of decades ago, intelligent young people took pride in their voluminous personal collections. They knew the ideas and authors and their connections, interrelations, histories, pre-histories, subtleties, flaws and merits, weaknesses and strengths. They knew the ideas that they believed in and could back up their views with contextual understanding and evaluation. Some of this obviously still exists (especially among those of us who are over 40; for any bibliophile with a local Friends of the Library, we are living in a Golden Age of low-cost, high-quality texts), but I believe that it is increasingly less common among the young. This is dangerous, especially as regards policy, which should be based more upon a broad and deep historical understanding of ideas than on pure theories and unselfcritical ideology.

For example, some young people I know from the courses I have taught know more about the climate crisis and the pitfalls of neoliberal globalization than most people of my own generation. Yet when you press them on solutions, they may start rhapsodizing about the virtues of global Marxism and spontaneous, bottom-up plebiscite world socialism. When you reply “no, really, what’s your solution?” they double down on models that either have never existed, have long records of trial and failure, or whose closest real-world analogs have never come close to working and have frequently devolved into totalitarian monstrosities.

I’m not sure what the answer is or even whether there is a workable solution. The brave new world is here, and you cannot unring a bell. Although we exist in the cognitive world and interact in the ideational world (of which the cyber world is a hybrid, subset, multiplier, and accelerant), human beings evolved, interact, and function in the physical world. I hope that thoughtful young people will appreciate the importance of physical books in the real world. Ideas are valuable and dangerous things, and some of the most pernicious concepts may also be among the most appealing on their face, the most seductive. In order for us to accentuate their value while minimizing their danger, we must know their histories and linkages. In my experience this is most effectively accomplished through the compiling of physical collections of books, in addition to discussion with other people and in conjunction with the powerful technologies of recent decades. We must know how to evaluate ideas before we accept or reject them.

It seems increasingly likely that the task of saving the world from ourselves will fall on the shoulders of the current and rising generations of young people–we are all counting on them. Let us hope that they will avail themselves of all of the tools and understanding needed to do the best job possible.

Setting the Record Straight: Stephen F. Cohen’s “War with Russia?”

Stephen F. Cohen, War With Russia?  From Putin & Ukraine to Trump & Russiagate, New York: Skyhorse Publications, Inc., 2019.  225 pages.

Reviewed by Michael F. Duggan

With all of the partisan theatrics and bad media coverage of “Russiagate,” it is difficult to know what the true state of U.S.-Russian relations is.  One can infer that it is not good, but just how bad is it, and what are the causes of its deterioration?  To answer these and related questions, it is useful to get back to basics, to see what the Americans who know the most about Russia are saying about U.S.-Russian relations as a backdrop to the dangerous hyperbole we see in the news every day.

There was a time when a foreign policy outlook advocating détente was considered a mature, bipartisan position in this country.  Times have changed.  Today, when a (perhaps the) leading scholar of Russian studies and history—an emeritus professor at Princeton and New York University and a long-time network media Russia expert—calls for parity and respect in our dealings with Russia and simple accuracy in media coverage, he is called “the most controversial Russia expert in America today” and subjected to juvenile cheap shots like “dupe” and “toady.”

The fact that Stephen Cohen is considered by some to be the most controversial Russia expert in the United States says far more about the times than it does about Cohen’s perspective.  Getting history right matters, and the unprofessional name-calling and unselfcritical media interpretations of U.S.-Russia affairs are not accidental.  They amount to a kind of vulgar, propagandistic perspective that is well beyond the pale when one considers that getting history wrong can itself bring dire consequences.

Consider, for example, that in the nineteenth century Germany led the world in the philosophy, theory, and practice of academic history.  Universities in the United States sent their most promising students of history—men like Henry Adams, John Lothrop Motley, and Francis Parkman—to Germany to learn the newest ideas in historiography.  And yet the outlooks of Hegel, von Ranke, and Treitschke were problematic—mistaken—and, with the help of other nations and their geopolitical miscalculations, helped launch the world into the bloodiest period of its history.

Consider also that George F. Kennan, with a sensible historical perspective, a deep understanding of his subject—Russia—and a realistic view of Soviet Marxism-Leninism and human nature, devised a grand strategy that ended the (first) Cold War more or less on schedule, even after considerable modification, tampering, and outright vandalism of his idea by lesser men.  Even with such insight and understanding, luck played a major part.

Today, by contrast, it appears that the United States in its dealings with Russia is acting on an incorrect historical model—an equal and opposite eschatology (relative to Marxism-Leninism) of its own, one based on aggressive neoliberal and neoconservative policies that have led us back into something like a new and potentially even more dangerous cold war.  One senses that Professor Cohen just wants to get history and news coverage right.  As we can see from the tragic history of the twentieth century, the cost of getting it wrong is too high.

Premises

Depending on how you count them, War with Russia? is either the ninth or tenth book by Stephen Cohen (some of which he edited, co-edited, or co-authored), and it reads somewhat differently from much of his academic work.  Taken from radio broadcasts covering about four-and-a-half years, from August 2014 to August 2018, the book is essentially a sequential collection of short, punchy essays that chronicle the unfolding of what he calls (perhaps a little too often) the “new Cold War”—something he has seen coming since the 1990s.  It is arranged in four chronological parts: Part I: The New Cold War Erupts 2014-2015; Part II: U.S. Follies and Media Malpractice 2016; Part III: Unprecedented Danger 2017; and Part IV: War With Russia?

His primary points are:

  • The United States and Russia are now engaged in a new Cold War that is even more dangerous than the first.  It is primarily the result of American triumphalism and its embracing of internal Russian elements that plundered the nation under Boris Yeltsin in the 1990s.  He also blames the provocative expansion of NATO far into the Russian sphere of influence and the demonization of Vladimir Putin and all things Russian.
  • Both American political parties and the mainstream press are engaging in a latter-day version of McCarthyism in which voices that dissent from the orthodox narrative condemning Russia and its president are maligned.  Therefore, unlike the first Cold War, there is no robust multi-sided debate about Russia today—the media has completely abdicated on this point—and those who oppose the monolithic American position are shouted down, even by people who call themselves liberals and who write for traditionally progressive outlets like The New Republic, The New York Times, and The Washington Post.
  • The mainstream reporting on Russia is therefore highly inaccurate—egregiously bad, one-sided, and shrill.  Far from embracing the best traditions of investigative journalism and a free press, coverage of Russia is characterized by lockstep conformity and a lazy acceptance of an unquestioned orthodoxy.
  • The United States, whose leaders negotiated with every Soviet leader during the Cold War, has abandoned the idea of parity in its negotiations with Russia and has given up on the idea of détente and the assumption that the other side has its own sphere of influence and legitimate regional interests.  Instead the U.S. has preferred to treat Russia disrespectfully as a pathetic, defeated country.
  • In short, the mainstream media has embraced the positions of U.S. intelligence agencies.
  • What has become known as “Russiagate” can be more accurately described as “Intelgate” and is the product of U.S. intelligence agencies.

The essays are as varied as the events of the period 2014-2018 and can be read individually or sequentially, thus allowing the reader to understand how events unfolded in the order they happened.  Cohen’s fluency in Russian history and culture is humbling in terms of detail, depth of understanding, and broadness of sweep.  Although certain themes do recur throughout—as he has noted himself, there is some unavoidable redundancy—the book holds together as an unfolding episodic interpretation of events during this period.

The book is well-written and reads close to the spoken language, and although Cohen always writes with clarity, this one is likely aimed at a broader audience and reads faster than previous books like Failed Crusade and Soviet Fates and Lost Alternatives.  It is a threadbare tenet of conventional wisdom that news is “the first edition of history,” and when reading Cohen, one gets the feeling that he is writing a more settled interpretation of events as they happen.  One does not have to agree with all of his points to realize the importance of his overall point of view, his diagnoses, and his prescriptions (in a recent interview, his wife, Katrina vanden Heuvel, editor of The Nation, and Dan Rather dissented on a number of Cohen’s points, as do I).

In terms of niche, Cohen may not be completely unique as the rational heretical expert standing against the tide, armed only with the truth, in an effort to set the record straight.  But his stature as a scholar and journalist makes him a Napoleonic figure on most issues regarding modern Russia and American relations with it.  Despite continuing efforts to malign or marginalize him, his point of view cannot be dismissed.  In some respects he is analogous to Alfred McCoy in regard to China (In the Shadows of the American Century) and Diana Johnstone on the Balkan wars of the 1990s (Fools’ Crusade), but in some ways his timbre is more that of a traditional centrist.  It is the politics of our day that has made him seem like a radical (in this sense, he may be akin to Andrew Bacevich, who describes himself as a traditional conservative).  All of these commentators provide precious alternatives of in-depth historical understanding to the unfounded clichés, bubbles, and misreported accounts of the corporate media.

Adopting a Mature Stance toward Russia

Why should the United States adopt a more conciliatory stance toward Russia?  For me the answer is simple: policy is about the pursuit of national interest, not the lording of moral superiority over others.  It is about trying to achieve the optimal over the maximal.  Quite simply (to paraphrase George Kennan on another topic), there is no reason why, as a mature nation, we cannot deal with Russia maturely as neither friend nor foe but as an important nation with its own legitimate interests and sphere of influence.  Why not treat them with the same dispassion with which we treat other major countries like Germany and Japan?  In other words, we should improve relations with Russia because it makes no sense to antagonize them, and because both the United States and Russia still have thousands of nuclear weapons pointed at each other on a hair-trigger, first-strike basis (see Daniel Ellsberg, The Doomsday Machine, and the review of it in this blog).

To understand the dangers of treating a nation harshly after a bitter struggle, one need only compare the results of Versailles with those of the Marshall Plan and the rebuilding of Japan.  Such a comparison underscores the folly of our present course.  Now add nuclear weapons to the equation.  Our baffling vilification of Russia is also driving it closer to China at a time when the latter is attempting to devise a Eurasian economic sphere that would undermine U.S. economic standing in the world, possibly allowing the yuan to supplant the dollar as the world reserve currency.

Cohen believes that the period we are now in is at least as perilous as the most dangerous periods of the Cold War, a new low.  He sees the recent and ongoing situations in Georgia, Syria, and Ukraine collectively as the Cuban Missile Crisis times three.  This may or may not be an overstatement or an imperfect equation; the Cuban crisis was a rapidly evolving direct confrontation characterized by poor communication between the U.S. and the USSR, and an American president given horrible advice by those around him.  But when taken as a historical whole, U.S. policy toward Russia since 1991 has been unnecessarily provocative—even confrontational—and one quickly realizes that any of these crises could have turned into a direct confrontation very quickly.  If this is Cohen’s point, then he is probably right.  And now another unnecessary crisis potentially looms with Russia’s ally, Iran.

As with the original Cold War, the danger today lies both in the possibility of accident and in miscalculation: in mistaking boilerplate rhetoric and saber rattling (if you will excuse the mixed metaphor) for genuine brinksmanship and escalation, and vice versa (e.g., was the positioning of heavy U.S. weapons in Poland and some of the Baltic nations an escalation or mere symbolism, how do the Russians see it, and how would the United States regard an equal and opposite move by the Russians?).  Policy makers in this nation and in Europe must realize that Russia—like any old and proud nation—can only be pushed so far.  What we might take to be just another round of NATO expansion might be a final straw to them.  Cohen, like the rest of us, presumably does not know about the existence or extent of back channels and secret diplomacy.

Americans may choose to live in a fool’s paradise and assume that there is always constructive communication and quiet and cooperative diplomacy between the U.S. and Russia that ameliorates all of the bellicose public posturing.  But even if this is true, it is scant comfort to those of us who believe that history is often characterized by mistakes and screw ups.  On this point, one would do well to research the origins of the informal military acronyms SNAFU and FUBAR.

When I spoke to friends about the obvious danger of American and Russian combat aircraft sharing the same airspace to bomb different sides of the civil war in Syria a few years ago, I was assured of the close communication and cooperation between U.S. and Russian planners.  I was skeptical then and am still not sanguine about the coordination of opposing military operations between the backers of proxies in a vicious civil war.  My reading of history is that accidents happen—especially in war.  Friendly fire is vastly underreported in every conflict, and incidents like the bombing of U.S. forces by American planes during Operation Cobra in the summer of 1944, or the airborne units shot down by Americans in Sicily the year before, may serve as cautionary examples.

With the added element of relying on the cooperation of an adversary in a hot war that could easily turn into a war between nuclear powers, it becomes clear that we are playing with Promethean fire regardless of precautions.  As I have noted before, in such cases the potential exists for an August 1914 scenario with October 1962 (+57) technology and capacity for destruction.  As a frustrated John Kennedy observed when a U-2 strayed far into Soviet airspace at the height of the Cuban Missile Crisis, “[t]here is always some son-of-a-bitch that does not get the word.”  During the same crisis, U.S. destroyers rolled “practice” depth charges on nuclear-armed Soviet submarines.

History

In order to understand Russian behavior in recent years, I think those of us in this country should ask ourselves how we would feel if the U.S. had “lost” the Cold War and then a revitalized USSR began to act in an expansionist manner.  

For instance, how would the United States respond if the Soviet Union had broken a vow not to move “one inch” (as Cohen has stated) into the American regional sphere of influence and then pushed the Warsaw Pact deep into Canada, supporting extremist anti-U.S. forces there in a successful effort to overthrow a democratically elected, pro-U.S. government in Ottawa?  The mainstream media rightfully despises the thugs who showed up at Charlottesville two summers ago, yet it is curiously silent about our Ukrainian “allies”—the most extreme of whom (including members of the Svoboda party) are more or less politically identical to them.

How would people in this country feel if, after 1991, the USSR had treated the United States as a defeated, second-rate nation, supported high-level bureaucrats (in Russia, the nomenklatura) as they plundered Federal pension funds, and allied itself with other internal elements that robbed our nation?  Professor Cohen has written extensively about the post-Cold War era—a period that has transitioned from news into history.  For a scholarly overview of the period from the 1980s until 2012, see Failed Crusade: America and the Tragedy of Post-Communist Russia [2000] and Soviet Fates and Lost Alternatives: From Stalinism to the New Cold War [2009, 2011].

“A Strategic Blunder of Potentially Epic Proportions”: The Expansion of NATO

The great misjudgment in American policy toward Russia began late in the George Herbert Walker Bush administration—Bush, who masterfully ended the Cold War only to start crowing about “victory” as the 1992 election loomed into sight—and then took off under Bill Clinton with a clean break from the past and a move away from an assumption of parity in our dealings with Russia.  Emblematic of the lack of sensitivity shown by the U.S. toward Russia was the (ongoing) expansion of the North Atlantic Treaty Organization farther and farther from the North Atlantic.

In 1990 Germany was reunited on terms aligning it with the West.  Beginning with the Clinton administration, NATO has expanded eastward, initially into the Czech Republic, Hungary, and Poland (1999).  Even though these are Central European nations that have frequently looked to the West (Poland is Catholic rather than Orthodox and uses the Roman rather than the Cyrillic alphabet), some critics noted that even this violation of earlier assurances looked like Western expansionism.  As George Kennan observed after President Clinton announced this initial expansion: “[t]he deep commitment of our government to press the expansion of NATO right up to the Russian border is the greatest mistake of the entire post-Cold War era” and “a strategic blunder of potentially epic proportions.”

In 2004 Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia joined NATO.  In 2009 Albania and Croatia joined.  In 2017 it was Montenegro.  Although it is difficult to imagine a U.S. military commander saying something like “You don’t want to go into an all-out war with Russia without Montenegro on your side,” the inclusion of Eastern European nations in NATO actually makes war with Russia more likely, and the U.S. now has a treaty obligation to defend them, even if doing so leads to a nuclear war.

Russian Psychology and History

What many casual observers in the United States apparently fail to understand about Russia is that it has the geographical qualities of a massive land empire (Cohen takes the “land empire” thesis to task in Soviet Fates) and is distrustful of outsiders and more concerned with buffer zones than with far flung expansion and conquest. In my opinion, an understanding of Russia’s tragic history of foreign invasion brings the impetus of this outlook into sharper focus. It also underscores why the expansion of NATO far into the Russian sphere of influence is so dangerous.

Media “Malpractice”

Cohen also takes to task the jaw-dropping hyperbole and outright falsehoods perpetuated by the mainstream media.  On this point he is a virtual voice in the wilderness and encourages others to also call out often-repeated lies and exaggerations.  The media’s getting it wrong goes beyond laziness, error, and even cynicism into what he calls “malpractice.” This malpractice includes the unhistorical characterization of the Russian “invasion” of Crimea (Crimea has long been an official or de facto part of Russia—how does a nation “invade” a region where it has already been for more than a century-and-a-half and where perhaps 80% of the people speak the language of the “invader”?). 

Perhaps the most notable misinformation perpetuated by the press is the often personal smearing of Putin himself (who supported the U.S. invasion of Afghanistan and gave President Obama face-saving cover to walk back his “red line” rhetoric about Syria).  Cohen believes that recent characterizations of Putin are on balance more severe than mainstream depictions of any of the Soviet leaders during the Cold War.

Charges of Hitlerian despotism leveled against Putin are especially perplexing when one considers that Soviet Russia lost 25-27 million people fighting Nazi Germany and that the war was mostly fought and won on the Eastern Front.  Any equation of Putin with Hitler is therefore foolish, inaccurate, and dangerous; Hitler was a phobic psychopath, and Nazi Germany was a rogue state with designs of ethnic warfare, the extermination of entire peoples, continental conquest, and world domination (and the subtext of comparing a foreign leader to Hitler is that he cannot remain in power, even if it takes a world war to remove him).

Putin, by contrast, fits well within the historical model of the Russian leader as strongman or strongwoman (e.g., Ivan, Peter, Catherine).  If the media must compare him, imperfectly and superficially, to a German leader, a more fitting analog would be a consolidator and practitioner of realpolitik like Bismarck rather than a madman like Hitler.  Like Bismarck, Putin is an unsentimental hardball realist and consolidationist with a good understanding of his nation’s vital interests (there are obvious differences as well).

Is Putin the sensitive soul into whose eyes George W. Bush gazed wistfully?  No.  But that is not the point.  If his past behavior is any indication, Putin is a national leader with whom we can do business, and one who beyond a certain point will not be pushed.  Thus the danger Cohen sees in our recent policy toward Russia.  And given Soviet losses during the Second World War, and the fact that an estimated seven out of ten Wehrmacht soldiers who died in combat were killed by Soviet forces, ad hominem comparisons to Hitler are not only in extremely bad taste but are likely to poison any hope for meaningful future dialogue.

One need only read the prologue of the book, “The Putin Specter: Who He Is Not,” and the first chapter, an essay dated August 27, 2014, titled “Patriotic Heresy vs. Cold War,” to get a fair sample of mainstream “Fallacy” versus Cohen’s scholarly “Fact” about the Russian president.

“Russiagate” vs. “Intelgate”

Cohen’s most controversial position is his assertion that what the media and political parties have characterized as “Russiagate” is really “Intelgate,” that the scandal alleging collusion with Russia and its interference with the 2016 U.S. elections was in fact a conspiracy hatched and pulled off by U.S. intelligence agencies.  On this point I am agnostic; his claim seems unlikely and conspiratorial, but then we are living in strange times.  Without giving the story away—an account Cohen explains in detail—I will simply recommend reading the book and judging for oneself.

I will only add that the Intelgate thesis is, in my opinion, intriguing and suggestive but unproved.  At the very least, Cohen’s positing of this theory is a striking and singular instance of an important scholar going out on a limb and possibly staking his reputation on a single claim, even if one does not agree with it.  But even if this theory proves to be “a bridge too far” beyond the other premises of this book, the rest of it holds up well with or without it and I think Cohen’s overarching interpretation about U.S.-Russia relations and their recent mutual history is mostly correct.  

Solutions: Parity and Détente

So what are the solutions posited by a man who calls himself a “national security patriot” and a “patriotic heretic” with regard to the tensions of a new cold war?  By historical standards—i.e., by the standards of the first Cold War—they are the most reasonable, the most conventional imaginable.  By the standards of the locked-brain ideology of the Washington Consensus, they are radical, scandalous, and perhaps even amount to a kind of appeasement: the embracing of détente based on parity and respect over the baffling and ill-considered provocation that has led the two countries into a new cold war that could go hot in the worst possible way at a moment’s notice.  As for loaded epithets like “appeasement,” I would contend that a moderate and rational approach to a potentially dangerous nation from a position of strength—the position of a nation with a one-trillion-dollar military backed by thousands of nuclear weapons—could be better characterized as measured maturity in the interest of maintaining peace.  Besides, what is the reasonable alternative?

Cohen’s solutions are straightforward: the United States does not have to be allies or enemies with Russia, but it should deal with them productively and for mutual benefit via détente, as we did in negotiations with communist leaders between the 1950s and the end of the Cold War (if we were willing to talk to communists, why not to Putin?).  We should deal with them fairly and evenly, as we would with any other nation.

Cohen’s point of view might be akin to that of de Tocqueville in recognizing that, due to their size, geography, history, and national interests, the United States and Russia will never be close friends.  His view is like that of Kennan, that as a mature nation we should strike a balance with them as neither allies nor enemies, a balance that from our perspective recognizes that they (like the U.S.) are entitled to a sphere of influence (an unfortunate fact that will persist as long as there are large and powerful nations). 

There is certainly a long and well-documented set of historical precedents for fair dealing with Russia, even if we regarded their system as a bad one in terms of rights and representation.  Like Eisenhower, Kennedy, Johnson, Nixon, post-Reykjavik Reagan, and the first President Bush prior to the summer of 1991, Cohen advocates a strategy of live-and-let-live détente based on an assumption of parity and mutual legitimate interests.  But even a policy of mutual accommodation does not guarantee peace—there were numerous times when the first Cold War almost turned hot—and some historians now say that, in retrospect, the world’s survival of the first great U.S.-Russian struggle was nothing short of miraculous.

There appears to be no agenda to this book other than to set things straight in the name of accurate reporting and a policy that promises a less dangerous course.  Cohen seems to be a man dedicated to the truth, a clear-sighted person in an age of The Emperor’s New Clothes who sees clearly when others are content not to see at all.

I recommend Cohen’s new book, War with Russia?  It reads quickly and can be read profitably in conjunction with his academic writing on Russia, especially Soviet Fates and Lost Alternatives (on the post-Cold War period and the subsequent events leading to the new Cold War).  The overall message appears to be that the unnecessary ratcheting-up of tensions by heavy-handed policy and media misrepresentation risks transforming the new cold war into a hot war between major powers that would likely turn nuclear.  He might have a point.

For discussions with the author about his new book, see the links below.

John Paul Stevens: The Last Maverick

By Michael F. Duggan

It is a favorite theme of this blog: although we live in a time of ideological division, there is an unspoken consensus in the Establishment left, right, and “center.” We live in a time that despises mavericks in public office, and now the last maverick of the Third Branch is gone. If there were two ways of seeing a case or a constitutional question, Stevens would think of a third, fourth, and fifth way that nobody had ever thought of before. And then he would convince others he was right.

A native of Chicago, he witnessed, at the age of twelve, Babe Ruth’s “Called Shot” home run, and had the framed scorecard to prove it. Stevens attended the University of Chicago and majored in English. One of his professors was Norman Maclean, who would go on to write A River Runs Through It.

At the urging of the university’s dean, Stevens entered the United States Navy the day before the Pearl Harbor attack. He served in the communications intelligence section (Op-20-G) and received the Bronze Star for his contribution. When he retired from public service in 2010, he was one of the last WWII veterans working for the U.S. Government (the only others that I know of who were still serving at the time of his retirement were Representatives John Dingell and Ralph Hall, and Senators Daniel Inouye, Daniel Akaka, and Frank Lautenberg).

After the war he attended the Northwestern University Law School on the G.I. Bill where he achieved the highest GPA in the school’s history. He clerked for Supreme Court Justice Wiley Rutledge during the October 1947 Term. He went on to a successful legal practice and was appointed to the United States Court of Appeals for the Seventh Circuit in 1970. He married twice and had four children.

In late 1975 Stevens was nominated by President Ford to fill the Supreme Court seat vacated by the retirement of William O. Douglas (the seat previously occupied by Louis Brandeis). Relative to the progressives of the Warren and early Burger Courts (Warren, Black, Douglas, Brennan, Marshall), he was regarded as a moderate. By the time he retired 34 years, 192 days later, he was the leader of what was by then seen as the Court’s progressive wing (it is likely that Stevens did not change so much as did the American political landscape and perceptions of the liberal-conservative spectrum).

At the age of 90, he was the second oldest Justice to retire from the High Court’s bench, and could have easily beaten the record held by Oliver Wendell Holmes, who had retired when a few months older. He wrote three books over the age of 90. His autobiography was issued only a month or two ago. He was the third longest-serving justice in U.S. history after his predecessor, William O. Douglas, and Stephen Field.

Stevens brought to oral argument a keen analytical mind, a deep and profound humanity, good humor, unfailing courtesy, and a perennial bow tie. A progressive Republican, his death marks the extinction of a noble political genus. He is one of those rare people whose passing makes the world seem less rational. It was demonstrably better with him in it and seems less hopeful without him. Although it is still early, and although he was in many respects a standalone figure, his historical reputation is secure and it is safe to call him a great jurist. Without a doubt, he led a great life and we are all better off because of him.

Edward O. Wilson at 90

Michael F. Duggan

The biologist Edward O. Wilson turned 90 on Monday, June 10th.

Arguably the most influential living scientist, he is a world authority on ants, the “father of sociobiology,” and a leader of the biodiversity movement who coined the terms “biophilia” and “Eremozoic” (the latter to describe the geological period dominated by human beings, sometimes called the Anthropocene).

In the 1970s he gave much-needed firepower to the “Nature” side of the Nature/Nurture discourse and infuriated a lot of social “scientists” (not long ago he wrote that “[h]istory makes no sense without prehistory, and prehistory makes no sense without biology”).

He has written 29 books–eleven of them since the age of 80–and won two Pulitzer Prizes (one for the classic 1978 On Human Nature). His newest book, Genesis: The Deep Origins of Society, just came out. His 2016 book Half-Earth tells us what we need to do to save the planet.

In his twin volumes, The Social Conquest of Earth and The Meaning of Human Existence–developing some ideas of Darwin from The Descent of Man–he posits the view that human morality is the result of tensions between the pressures of individual selection (and thus selfishness) and the pressures of group selection (altruism/empathy). He believes that the success of the human species (success to a fault) is largely due to our eusociality, a cooperative social structure (strategy?) shared in very different form with social insects like ants and termites, creatures that have also taken over the world.

One need not agree with him on all points he has made over a long and illustrious career to recognize his importance.

I met him in the early 2000s and he inscribed my first edition copy of On Human Nature. Seemed to be a first-rate guy. Happy birthday.

Six Books on the Environment

John Gray, Straw Dogs

Roy Scranton, Learning to Die in the Anthropocene and We’re Doomed, Now What?

Jedediah Purdy, After Nature

Edward O. Wilson, Half Earth

Adam Frank, Light of the Stars

Reviewed by Michael F. Duggan

Modern urban-industrial man is given to the raping of anything and everything natural on which he can fasten his talons.  He rapes the sea; he rapes the soil; the natural resources of the earth.  He rapes the atmosphere.  He rapes the future of his own civilization. Instead of living off of nature’s surplus, which he ought to do, he lives off its substance. He would not need to do this were he less numerous, and were he content to live a more simple life.  But he is prepared neither to reduce his numbers nor to lead a simpler and more healthful life.  So he goes on destroying his own environment, like a vast horde of locusts.  And he must be expected, persisting blindly as he does in this depraved process, to put an end to his own existence within the next century.  The years 2000 to 2050 should witness, in fact, the end of the great Western civilization.  The Chinese, more prudent and less spoiled, no less given to over-population but prepared to be more ruthless in the control of its effects, may inherit the ruins.

                        -George Kennan, diary entry, March 21, 1977

No witchcraft, no enemy action had silenced the rebirth of new life in this stricken world… The people had done it themselves.

                        -Rachel Carson

We all see what’s happening, we read it in the headlines every day, but seeing isn’t believing and believing isn’t accepting.

-Roy Scranton

Among the multitude of voices on the unfolding environmental crises, there are five that I have found to be particularly compelling.  These are John Gray, Jedediah Purdy, Roy Scranton, the biologist Edward O. Wilson, and, most recently, the physicist Adam Frank.  This post was originally intended to be a review of Scranton’s newest book, a collection of essays called We’re Doomed. Now What?, but I have decided instead to place that review in a broader context of writing on the environment.

I apologize ahead of time for the length and roughness—the almost complete absence of editing—of this review/essay (the endnotes remain unedited, unformatted, and incomplete, and a few remain in the body of the text).  This is a WORKING DRAFT. The introduction is more or less identical to an article of mine that ran in CounterPunch in December 2018.

Introduction: Climate Change and the Limits of Reason

Is it too late to avoid a global environmental catastrophe?  Does the increasingly worrisome feedback from the planet indicate that something like a chaotic tipping point is already upon us?  Facts and reason are slender reeds relative to entrenched opinions and the human capacity for self-delusion.  I suspect that neither this essay nor others on the topic are likely to change many minds.   

With atmospheric carbon dioxide at its highest levels in three to five million years with no end to its increase in sight, the warming, rising, and acidification of the world’s oceans, and the destruction of habitat and the cascading collapse of species and entire ecosystems, some thoughtful people now believe we are near, at, or past a point of no return.  The question may not be whether or not we can turn things around, but rather how much time is left before a negative feedback loop from the environment as it was becomes a positive feedback loop for catastrophe.  It seems that the answer is probably a few years to a decade or two on the outside, if we are not already there.  The mild eleven-thousand-year summer—the Holocene—that permitted and nurtured human civilization and allowed our numbers to grow will likely be done in by our species in the not-too-distant future.

Humankind is a runaway project.  With a world population of more than 7.686 billion, we are a Malthusian plague species.  This is not a condemnation or indictment, nor some kind of ironic boast.  It is an observable fact.  The evidence is now overwhelming that we stand at a crossroads of history and of natural history, of nature and our own nature.  The fact that unfolding catastrophic change is literally in the air is undeniable.  But before we can devise solutions of mitigation, we have to admit that there is a problem.                

In light of the overwhelming corroboration—objective, tested and retested readings of atmospheric CO2 levels, the acidification of the oceans, the global dying-off of the world’s reefs, and the faster-than-anticipated melting of the polar and Greenland icecaps and subsequent rises in mean ocean levels—those who still argue that human-caused global climate change is not real must be regarded frankly as either stupid, cynical, irrational, ideologically deluded, willfully ignorant or distracted, pathologically stubborn, terminally greedy, or otherwise unreasonably wedded to a bad position in the face of demonstrable facts.  There are no other possibilities by which to characterize these people and, in practical terms, the difference between these overlapping categories is either nonexistent or trivial.  If this claim seems rude and in violation of The Elements of Style, then so be it.1  The time for civility and distracting “controversies” and “debates” is over, and I apologize in no way for the tone of this statement.  It benefits nobody to indulge cynical and delusional deniers as the taffrail of the Titanic lifts above the horizon.

Some commentators have equated climate deniers with those who deny the Holocaust and chattel slavery.  Although moral equations are always a tricky business, it is likely that the permanent damage humans are doing to the planet will far exceed that of the Nazis and slavers.  The question is the degree to which those of us who do not deny climate change but who contribute to it are as culpable as these odious historical categories.  Perhaps we are just the enablers—collaborators—and equivalent of those who knew of the crimes and who stood by and averted their eyes or else knowingly immersed themselves in the immediate demands and priorities of the private life.  No one except for the children, thrown unwittingly into this unfolding catastrophe, is innocent.

The debate about whether human activity has changed the global environment is over in any rational sense.  Human-caused climate change is real.  To deny this is to reveal oneself as being intellectually on the same plane as those who believe that the Earth is the flat center of the universe, or who deny that modern evolutionary theory contains greater and more accurate explanatory content than the archetypal myths of revealed religion and the teleological red herring of “Intelligent Design Theory.”  The remaining questions will be over the myriad of unknowable or partially or imperfectly knowable details of the unfolding chaos of the coming Eremocene (alternatively Anthropocene)2 and the extent of what the changes and consequences will be, their severity, and whether or not they might still be reversed or mitigated, and how.  The initial question is simply whether or not it is already too late to turn things around.

We have already changed the planet’s atmospheric chemistry to a degree that is possibly irreparable.  In 2012 atmospheric CO2 levels at the North Pole exceeded 400 parts per million (up from the pre-industrial level of around 290 ppm).  At this writing carbon dioxide levels are around 415 ppm.  This is not an opinion, but a measurable fact.  Carbon dioxide levels can be easily tested even by people who do not believe that human activity is altering the world’s environment.  Even if the production of all human-generated carbon were stopped today, the existing surfeit would last for a hundred thousand years or more if it is not actively mitigated.3  Much of the damage therefore is already done—the conditions for catastrophic change are locked in place—and we are now just waiting for the effects to manifest as carbon levels continue to rise unabated, with minor plateaus and fluctuations.

Increases in atmospheric carbon levels have resulted in an acidification of the oceans.  This too is an observable and quantifiable fact.  The fact that CO2 absorption by seawater results in its acidification and the fact that atmospheric carbon dioxide traps heat more effectively and to a greater extent than oxygen are now tenets of elementary school-level science and are in no way controversial assertions.  If you do not acknowledge both of these facts, then you do not really have an opinion on global climate change or its causes.

As it is, the “climate debate”—polemics over the reality of global climate change—is not a scientific debate at all, but one of politics and political entertainment pitting testable/measurable observations against the dumb and uninformed denials of the true believers who evoke them or else the cynics who profit from carbon generation (the latter are reminiscent of the parable of the man who is paid a small fee to hang himself).4 Some general officers of the United States military are now on the record stating that climate change constitutes the greatest existing threat to our national security.5

Some deniers reply to the facts of climate change with anecdotal observations about the weather—locally colder or snowier than usual winters in a given region are a favorite distraction—with no heed given to the bigger picture (never mind the fact that the cold or snowy winters that North America has experienced since 2010 were caused by a dip in the jet stream caused by much warmer than usual air masses in Eurasia that threw the polar vortex off of its axis and down into the lower 48 states while at times Greenland basked in 50 degree sunshine). 

An effective retort to this kind of bold obtuseness is a simple and well-known analogy: the climate is like your personality and the weather is like your mood.  Just because you are sad for a day or two does not mean that you are a clinical depressive, any more than a locally cold winter set in the midst of the two hottest decades ever recorded worldwide represents a global cooling trend.  Some places are likely to cool off as the planet’s overall mean temperature rises (the British Isles may get colder as the Gulf Stream is pushed further south by arctic melt water).  Of course human-generated carbon is only one prong of the global environmental crisis, and a symptom of existing imbalance.

Human beings are also killing off our fellow species at a rate that will soon surpass the Cretaceous die-off, in what is the sixth great mass extinction of the Earth’s natural history.6 This is a fact that is horrifying insofar as it can be quantified at all—the numbers here are softer and more conjectural than the precise measurements of chemistry and temperature, and estimates may well be on the low side.  The true number of lost species will never be known, as unidentified species are driven into extinction before they can be described and catalogued by science.7  But as a general statement, the shocking loss of biodiversity and habitat is uncontroversial in the communities that study such things seriously.  Human history has shown itself to be a brief and destructive branch of natural history in which we have become the locusts, or something much, much worse than such seasonal visitations and imbalances.

As a friend of mine observed, those who persist in their fool’s paradise or obstinate cynicism for short-term gain and who still deny the reality of global climate change must ultimately answer two questions: 1). What evidence would you accept that humans are altering the global environment?  2). What if you are wrong in your denials?

From my own experience, I have found that neither fact-based reason nor the resulting cognitive dissonance it instills changes many minds once they are firmly fixed; rationalization and denial are the twin pillars of human psychology, and it is a common and unfortunate characteristic of our species to double-down on mistaken beliefs rather than admit error and address problems forthrightly.  This may be our epitaph.

And now the book reviews.

John Gray: The “Rapacious Primate” and the Era of Solitude

Straw Dogs: Thoughts on Humans and Other Animals, London: Granta, 2002 (paperback 2003), 246 pages.

Around 2007, a friend of mine recommended to me some books by the British philosopher and commentator, John Gray.  On issues of human meaning/non-meaning vis-à-vis the amorality of nature, Gray, an urbane lecturer, comes off in this book as a two-fisted scrapper, a dark realist who loves to mix things up and disabuse people of moral fictions and illusions.  Straw Dogs is not specifically on the world environmental crises, but rather on human nature. Ecological degradation obviously figures into his thesis prominently.

Straw Dogs is a rough-and-tumble polemic—Nietzsche-like in tone and format but Schopenhauer-like in its pessimism. It is a well-placed barrage against humanism in which the author, painting in broad strokes, characterizes his target as just another delusional faith, a secularized version of Christianity. Where Western religion promises eternal salvation, humanism, as characterized by Gray, asserts an equally unfounded faith in terrestrial transcendence: the myths of social progress, freedom of choice, and human exceptionality as a construct, an artificial distinction that “unnaturally” separates humans from the rest of the living world.  Even such austere commentators as Nietzsche (and presumably the existentialists that followed)—far from being nihilists—are in Gray’s assessment latter-day representatives of the Enlightenment, perhaps even Christianity in another guise, trying to keep the game of meaning and human uniqueness alive.                                                                                                                   

Gray begins this book with a flurry of unsettling assertions and observations.  In the preface to the paperback edition, he writes:

“Most people today think that they belong to a species that can be the master of its own destiny. This is faith, not science.  We do not speak of a time when whales and gorillas will be masters of their destinies. Why then humans?”

In other words, he believes that it is a human conceit to assume that we can take charge of our future any more than any other animal and that this assumption is based on an erroneous perception of human exceptionality by type from the rest of the natural world.  At the end of this section, he writes:

“Political action has become a surrogate for salvation; but no political project can deliver humanity from its natural condition.” 

Here then is a perspective so conservative, so deterministic and fatalistic about workable solutions to the bigger problems of human nature, as to dismiss them outright rather than entertain them even as possibilities.  This is not to say that he is wrong.

But it is really in the first few chapters that Gray brings out the big guns in explaining that not only can we not control our fate, but that we have, through our very success as an animal, become a Juggernaut, a plague species that is inexorably laying waste to much of the living world around us.  Interestingly he does not lay this at the feet “of global capitalism, industrialization, ‘Western civilization’ or any flaw in human institutions.” Rather, “It is a consequence of the evolutionary success of an exceptionally rapacious primate.  Throughout all of history and prehistory, human civilization has coincided with ecological destruction.”  Our trajectory is set by biological destiny rather than by economic, political, or social flaws or by technological excess. We are damned by the undirected natural process that created and shaped our species and are now returning the favor upon nature by destroying the biosphere.

We destroy our environment then because of what we are (presumably industrial modernity is merely an accelerant or the apex manifestation of our identity as a destroyer).  We have by our very nature become the locusts, and destruction is part and parcel of who we are rather than a byproduct of a wrong turn somewhere back in our history.  Destruction and eventually self-destruction is in our blood, or more correctly, in the double helix spirals and the four-letter code of our DNA manifested in our extended phenotype.  The selfish gene and self-directed individual coupled with the altruism of group selection form a combination that will likely lead to self-destruction along with the destruction of the world as it was.

With the force of a gifted prosecutor presenting a strong case, and with all of the subtlety of the proverbial bull in a china shop, Gray observes that we are killing off other species on a scale that will soon rival the Cretaceous die-off that wiped out the dinosaurs along with so much else of the planet’s flora and fauna 65 million years ago.  He points to early phases of human overkill and notes that most of the megafauna of the last great ice age, animals like the woolly mammoth and rhinoceros, the cave bear, saber-toothed cats, North American camels, horses, lions, and mastodons (about 75% of all the large animals of North America), and almost every large South American animal—not-so-long-gone creatures that are sometimes anachronistically lumped together with the dinosaurs and trilobites as distantly pre-human—were likely first-wave casualties of modern human beings (there was a vestigial population of mammoths living on Wrangel Island until about 3,700 years ago, or about 800 years after the Pyramids of Giza were built).8  Quoting James Lovelock, Gray likens humans to a pathogen, a disease, a tumor, and indeed there is a literal resemblance between the light patterns of human settlement as seen from space and naturalistic patterns of metastasizing cancer.

Gray concedes “that a few traditional peoples lived in balance with the Earth for long periods,” that “the Inuit and Bushman stumbled into a way of life in which their footprints were slight.  We cannot tread the Earth so lightly. Homo rapiens has become too numerous.”  He continues:

“A human population of approaching 8 billion can only be maintained by desolating the Earth.  If wild habitat is given over to cultivation and habitation, if rain forests can be turned into a green desert, if genetic engineering enables ever-higher yields to be extorted from the thinning soils—then humans will have created for themselves a new geological era, the Eremozoic, the Era of Solitude, in which little remains on the Earth but themselves and the prosthetic environment that keeps them alive.”

According to Gray then, wherever humans live on a scale of modern civilization (or any scale above the most benign of hunter-gatherers) there will be ecological degradation—that there is no way to have recognizable civilization without inflicting harm to the environment.  Similarly “green” politics and “sustainable” energy initiatives are also pleasant but misleading fictions—self-administered opiates and busy work to assuage progressives and Pollyannas beset with guilty consciences.  To Gray environmentalism is the sum of delusions masquerading as real solutions and high-mindedness. Gray clearly believes what he is saying and is not just trying to provide a much-needed shaking up of things by making the truth more clear than it really is.  Regardless, his position seems to be a development of the adage that given time and opportunity, people will screw up everything.

Gray’s dystopian future of a global human monoculture, his “green desert” or Eremozoic (“era of solitude”9), finds parallel expression in the term Anthropocene, or the geological period characterized by the domination of human beings.  Adherents to this concept span a wide range from the very dark to the modestly optimistic to the insufferably arrogant to the insufferably idealistic.

Regardless of which term we use, Gray doesn’t think that things will ever get that far.  Sounding as if he himself were beginning to embrace a historical narrative of his own, he writes that past a certain point, nature (understood as the Earth’s biosphere) will start to push back.  The idea is that the world human population will collapse to sustainable levels, just like an out-of-control worldwide plague of mice, lemmings, or locusts.  Like all plagues, human civilization embodies an imbalance in an otherwise more or less stable equilibrium and is therefore by its nature fundamentally unsustainable and eventually doomed (almost 20 years ago, with a population of about six billion, the human biomass was estimated to be more than 100 times greater than that of any other land animal that ever lived10).

There is of course an amoral “big picture” implication to all of this—a view of the natural world that, like nature itself, is beyond good and evil—which recognizes that sometimes large changes in natural history resulting from both gradual change and catastrophic collapse have in turn resulted in an entirely new phase of life rather than a return to something approximating the previous state of balance.  This would include the rise of the photosynthesizing/carbon-trapping/oxygen-producing plants that took over the world, fundamentally changing the atmospheric chemistry from what had existed before and therefore the course of life that followed.11 More on this in the discussion below on Adam Frank’s Light of the Stars.

Gray’s thesis appears to have elements of a Malthusian perspective and the Gaia hypothesis of James Lovelock and Lynn Margulis.  It is unclear how Gray can be so certain of the inevitability of such dire outcomes—that humans lack any kind of moderation and control and that nature will necessarily push back (could humankind, embracing a greater degree of self-control, be an agent of the Gaia balancing mechanism?).  Such certainty seems to go beyond a simple extrapolation of numbers and the subsequent acknowledgment of likely outcomes, into an actual deterministic historical narrative—an eschatological assertion like the ones he takes to task in his excellent 2007 book Black Mass.  My sense is that Gray will likely be right.

As a theory then, I believe that the flaw in Gray’s thesis lies in its deterministic inevitability, its necessity, its fatalism, when we do not even know whether the universe (or the biosphere as a subset) is deterministic or indeterministic.  We may very well kill off much of the natural world and ourselves with it, but this may have less to do with evolutionary programming or biological determinism than with inaction or bad or ineffective decisions in regard to the unprecedented problems that face us.  I also realize that if we fail, this will be the ultimate moot point in all of human history.

The Gaia hypothesis (which is a real scientific theory) may turn out to be true. Perhaps nature will protect itself like a creature’s immune system by eradicating a majority of what William C. Bullitt called “a skin disease of the earth.”12  The problem is that this predictive aspect of the theory—really an organon or meta-theory—purports to describe a phenomenon that cannot be tested (although the extinction or near-extinction of humankind would certainly corroborate it). On the other hand, the regulation of atmospheric gases by the biosphere is real and testable.13

Let me clarify the previous paragraph: if the Gaia hypothesis maintains that the Earth’s biosphere is self-regulating (e.g. maintaining atmospheric oxygen levels at a steady state in resisting the tendency in a non-living system toward a chemical equilibrium), then this is a theory that can be accounted for by physics (e.g. James Lovelock’s “Daisyworld” thought experiment) and is not teleology or metaphysics (See: Adam Frank, Light of the Stars, 129; see also Lynn Margulis, Symbiotic Planet, 113-128).  If we hypothesize that there are elements of the biosphere that will act like a creature’s immune system in eradicating the surplus human population, then we have possibly ventured into the realm of metaphysics.

As a practical matter, any successful, intelligent, willful animal that can eradicate its enemies and competitors and alter its environment (both intentionally and unintentionally) will run afoul of nature. Edward O. Wilson has expressed this idea.  But is this a tenet of common sense?  Logical necessity?  Biological or physical determinism?  And as a small subset of nature, is it even possible for us to know what “necessity” is for nature?  Are we condemned to extinction due to a lack of ability to adapt to changes increasingly of our own making, arising from our own nature? And is our extinction made inevitable by a surfeit of adaptability and successful reproduction (i.e. the very qualities that allowed us to succeed)?  Does success at a certain level guarantee failure? Is balance possible in such a species?  What of balance and creatures whose numbers are held in sustainable check in a steady state for tens, and in some cases hundreds, of millions of years in relatively stable morphological form—the scorpion, shark, crocodile, and dragonfly—which live long enough to diversify slightly or change gradually along with conditions in the environment?  What of animals that have improved their odds (cats and dogs come to mind) through intelligence and a mutually beneficial partnership and co-evolution with humankind?

Gray says that we cannot control our fate, and yet our very success and perhaps our downfall is the result of being able to control so much of our environment (the elimination of natural enemies, from animal competitors to endemic diseases, and the regulation of human activity and production to guarantee water, food, energy, etc.).  Any animal that can eliminate or neutralize the counterbalances to its own numbers will create imbalance, and unchecked imbalance leads to tipping points.14  It is ironic that Gray lays all of this at the feet of the human species as the inevitable product of our animal nature, as the result of biological and even moral inevitability, and yet I detect a tone of judgment about it all, as if we are somehow to blame for who we are, for characteristics that Gray believes are intrinsic and unalterable.

Gray, then, is a bleak post-humanist who apparently adheres to humanist values in his own life (indeed, as Camus knew, a view espousing a void of deontological values must lead either to humanism or nihilism, and nobody lives on a basis of nihilism).  In an interview with Deborah Orr that appeared in the Independent, he states that “[w]e’re not facing our problems.  We’ve got Prozac politics”—an odd claim given the supposed inevitability of those problems and the impossibility of fixing them, and an odd statement for a behavioral determinist.  Moreover, although he powerfully criticizes the proposed solutions of others, his own solutions are vague and unlikely to remedy the situation (not that that is their purpose).15  When he writes on topics outside of his areas of fluency (artificial consciousness, for instance), his ideas are not especially convincing.16

Of course in a literal biological sense Gray is right about a lot: humans are just another animal, and to assert otherwise is to create an artificial distinction.  But even here, the demarcation between the organic and the artificial/synthetic (meaning the product of the human extended phenotype—a “natural category”) has to be further defined and is a useful distinction (“altered,” “manmade,” or “human-modified nature” may be more constructive, if inelegant, refinements of the “artificial” or “unnatural”).  After all, are domesticated animals “natural,” are feral animals “wild” in conventional usage, and does calling everything “natural” add clarity to finer delineations?

Gray frames his discussion as an either/or dichotomy of the utopian illusion of progress versus inevitable apocalyptic collapse.  But what if the truth of the matter is not this cut-and-dried?  Perhaps we cannot be masters of our fate in an ultimate sense, but can we manage existing problems and new ones as they arise, even from past solutions?  Although we have done so in past, more modest instances, here the devil lies in both the scale and the details, and the details may include a series of insurmountable hobbles and obstacles.

In Gray we may not be far off from Roy Scranton’s prescription of acknowledging defeat, and personal decisions about learning to die in a global hospice, but we are not there yet.  The chances of redeeming the situation may be one in 100 or one in 1,000, but there is still a chance.  As glorified simians—each a “super monkey” in the words of Oliver Wendell Holmes, Jr.17 (the flipside of Gray’s homo rapiens)—we are audacious creatures who must take that one chance, even if it turns out to be founded on delusions.  “If not gorillas and whales,” Gray asks, “why then humans?”  Because we are natural-born problem solvers; because gorillas and whales have never put one of their own on the Moon.  Why humans?  Because the New Deal, the industrial mobilization of the Second World War, the Manhattan Project, the Marshall Plan, and the Apollo Moon Project are matters of the historical record and not matters of faith.

Far from seeing human civilization in terms of enlightened progress, we must come to regard it as ongoing damage control: snuffing out fires as they spring up and then managing the spinoff problems that emerge from previous solutions—mitigating rather than merely adapting or surrendering.  This will involve an unending series of brutal choices and a complete reorientation of the human relationship with nature, whose only appeal will be that they are preferable to our own extinction and to inflicting irreparable damage on the world of which we are a part.

If Gray is simply making a non-deterministic Malthusian case that, unaltered, human population growth will likely result in a catastrophic collapse, we could accept this as a plausible and perhaps even very likely hypothesis.  If, on the other hand, he is saying that the Earth is itself a living being and will necessarily push back against human metastasis through a sort of conscious awareness or physical law-like behavior, then the truth is yet to be seen.

What, then, is the practical distinction between the deterministic inevitability of Gray’s (Lovelock/Margulis’s) Gaia model and the practical inevitability of a Malthusian model?  (Malthus himself hints at something very much like the Gaia thesis when he refers to famine as “the most dreadful resource of nature… The vices of mankind are active and able ministers of depopulation.  They are the precursors in the great army of destruction, and often finish the dreadful work themselves.  But should they fail in this war of extermination, sickly seasons, epidemic, pestilence, and plague, advance in terrible array, and sweep off their thousands and tens of thousands.  Should success still be incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow, levels the population with the food of the world” (Malthus, p. 61).)  The answer is that the latter is inevitable only if the conditions leading toward a collapse remain unaltered, and it therefore allows for the possibility of a workable solution where the inevitable model does not.  As that greatest of Malthusian-antagonists-turned-Victorian-progressive-protagonists of English literature, Ebenezer Scrooge, in all of his Dickensian wordiness, demands of the Ghost of Christmas Present:

“Spirit, answer me one question: are these the shadows of things that will be or the shadows of things that may be only?  Men’s actions determine certain ends if they persist in them.  But if their actions change, the ends change too.  Say it is so with what you show me… Why show me this if I am past all hope?”18 

In the words of another English writer also given to overwriting, “aye, there’s the rub.”  Perhaps it is not too late for humankind to change its ways and to regard writers on the environment as latter-day analogs of the Ghosts of Christmas Present and Future.  It should be noted that under Malthus, there are survivors once the excess is eliminated.19

If Gray is right, some have argued that we might as well keep on polluting and degrading the environment, given that destruction flows from unalterable human nature and therefore self-extermination is inevitable.  Tiny Tim will go to an early grave no matter what changes and accommodations Scrooge makes in a closed universe.20  As Gray himself writes, “[p]olitical action has come to be a surrogate for salvation; but no political project can deliver humanity from its natural condition.”  Bah Humbug.

Of course, whether the impending collapse of world civilization is deterministically certain or merely certain in a practical or probabilistic sense is ultimately irrelevant, given that either way it will likely come to pass.  The question here is whether we will catastrophically implode as just another plague species, or whether we can manage a controlled decline in population to a sustainable steady state (and do the same with carbon even earlier).  It is the difference between an uncontrolled world of our own making and one in which we shape events piecemeal, through suitable incremental goals, toward a steady state.  It is the difference between a slight chance and no chance at all.

Although I am not sold on the idea that biology is destiny—even though we can never untether ourselves from nature or our own nature, we can perhaps rise above our brute character with moderation and reason—I do agree that past a certain point, if we kill off the natural world, we will have killed ourselves in the process.  There will never be a human “post-natural” world.

One could argue that the audacity, hubris, and capacity for innovation that allowed us to take over the world are value-neutral qualities that could be reoriented toward curbing our own success.  One wonders what value Gray credits to human consciousness and to human ideas, beyond an admission that science and technology (notably medical and dental) progress.  One senses that he sees our species as unworthy rather than as tragic.

Darwinian success may lead to Malthusian catastrophe, just as a human apocalypse could mean salvation for the rest of the living world.  The over-success of the human species is the result of natural drives to survive, to improve our situation, and to eliminate the competition (as well as of an excellent blueprint—our genes—and of our nature, which is divided between the individual and the group; see E.O. Wilson, The Meaning of Human Existence).  More specifically, if these powerful tools served us so well in making us the biological success we have become—and if survival is the conscious or unconscious goal of animals—then it is an artificial distinction to claim that we could not curtail this success with the same tools.

In the interest of full disclosure, I must say that I don’t share Gray’s apparent contempt for humanism or the Enlightenment.  His own ideas stand on the shoulders of, or in proximity to, these ideas and trends, and would not otherwise exist without them.  As a friend of mine observed, if we think of the natural world as a living organism (as Gray might), then, by way of analogy, human beings might be regarded as the most advanced, most conscious neurons of the creature’s brain.  The fact that we have become a runaway project does not make us bad (even if we accept Gray’s premise that humans destroy nature because of who we are, we can hardly be blamed for being who we are).  The fact that brain cells sometimes mutate into brain cancer hardly makes brain cells bad.21

One problem with writing about nature is that the living world is like a great Rorschach test onto which we project our beliefs and the philosophy à la mode, reading them back into our observations and the lessons we draw from it.  Emerson and Thoreau are mystics of a new-agey pantheism “as it exists in 1842.”  Malthus is a conservative economist and moralist wedged between the Enlightenment he helped to kill and the naturalism and modernity he helped usher in.  Darwin is a reluctant naturalist keenly aware of the importance of his great idea but shy of controversy and invective.  In Pilgrim at Tinker Creek, Annie Dillard is a perceptive and precociously odd woman-child who likes bugs and is endowed with a poet’s genius for the written word, reporting what she sees with such brute honesty that she overwhelms herself.22  Gray fluctuates from neo-Hobbesian realist to Gaia fatalist to Schopenhauer-like pessimist.

To be fair, Straw Dogs is probably not Gray’s best book (see Black Mass, for instance).  In the end, there is something a little facile, a little shallow about the swagger, the pose he strikes here—the professional doom-and-gloomer on a soapbox frightening the fancy folk out of their smug orthodoxy.  Although there are few things more dangerous than a true believer, one comes away from Gray wondering if he believes all of his own ideas.  This is not to say that there are not powerful ideas here or that they are wrong.  My gut feeling is that the book may one day be regarded as prophecy.

Roy Scranton and Nietzsche’s Hospice

Learning to Die in the Anthropocene, City Lights Books, 2015, 142 pages.                                          

Another of the more eloquent voices on the dark side of the Anthropocene perspective is Roy Scranton.  A soldier and scholar who has glimpsed the ruined future of humankind in the rubble and misery of Iraq, Scranton believes that it is simply too late to save the environment.  The time for redemption has passed. Full stop. 

His response, therefore, is one of acceptance and adaptation: as members of a myth-making species, people should acknowledge that the world we knew is finished and let it die with courage and dignity in the unfolding Anthropocene.  In this prescription he combines Nietzsche’s premise of living on one’s own terms with the Jungian preoccupation with myths.  In some respects he is the opposite of Gray in that he embraces humanism and mythmaking and places much of the blame at the feet of capitalism rather than our animal nature.  (Scranton 2015, 23-24; Gray 2013, 112-118)

His two most revealing pieces on this topic are the hard-hitting article “We’re Doomed.  Now What?” and the book Learning to Die in the Anthropocene, both from 2015.

In some respects Scranton goes beyond Gray by asserting that things are already too far gone as a matter of fact, and that all that remains is to learn to let civilization die.  Scranton is a noble, disillusioned bon vivant of the mind forced by circumstances and his own clear and unflinching perception into fatalistic stoicism. 

In Learning to Die in the Anthropocene, a grimly elegant little book in which he builds his case, Scranton acknowledges the neoliberal Anthropocene and recognizes its necessarily terminal nature.  But he is speaking about the death of the human world as we know it, with a general idea of how to adapt, learn, survive, and pass on wisdom in the world after.

Scranton is not as elemental as Gray, and his claim is not necessarily deterministic in character (i.e., that the looming end is the result of cosmic or genetic destiny or the natural balancing of the biosphere).  He simply observes that things are too far gone to be reversed.  Where Gray places blame squarely on the animal nature of homo rapiens—“an exceptionally rapacious primate”—and not on capitalism or Western civilization, Scranton puts much of the blame, both practical and moral, at the feet of carbon-fueled capitalism, “a zombie system, voracious and sterile,” an “aggressive human monoculture [that has] proven astoundingly virulent but also toxic, cannibalistic and self-destructive.” (Gray 2002 (2003), 7, 151, 184; Scranton 2015, 23)  As with Edward O. Wilson before him, he calls for a “New Enlightenment.” (Scranton 2015, 89-109; Wilson 2012, 287-297)

For all of his insight, Scranton does not advance grandiose theories about human nature (most of his condemnation is of economics and consumerism and the realities of power, although he does believe that “[t]he long record of human brutality seems to offer conclusive evidence that both individually and socially organized violence [is] as biologically a part of human life as are sex, language, and eating”).  He just looks at the world around him—peers Nietzsche-like into the unfolding abyss—and does not blink.  Honest, sensitive, and intelligent, he simply tells the truth as he sees it.  He accepts the inevitable without illusion or delusion.  The time for redemption has passed, and we must learn to let our world die with whatever gives us meaning.

As with Gray, Scranton may prove to be right as a practical matter, and he believes the end to be a matter of empirical fact rather than the unfolding of biological, historical, or metaphysical necessity.  He speaks about learning to die, but his book is palliative in tone only as regards capitalistic civilization.  He states:

“The argument of this book is not that we have failed to prevent unmanageable global warming and that the global capitalist civilization as we know it is already over, but that humanity can survive and adapt to the new world of the Anthropocene if we accept human limits and transience as fundamental truths, and work to nurture the variety and richness of our collective cultural heritage.  Learning to die as individuals means letting go of our predispositions and fear.  Learning to die as a civilization means letting go of this particular way of life and its ideas of identity, freedom, success, and progress.  These two ways of learning to die come together in the role of the humanist thinker: the one who is willing to stop and ask troublesome questions, the one who is willing to interrupt, the one who resonates on other channels and with slower, deeper rhythms.” (Scranton 2015, 24)

He is speaking of the death of the world as we knew it and the individual lives we knew.  But he is also speaking of adapting and emerging in a time after with a universal humanism shorn of the assumptions of a failed world.  In this sense, he is telling us what to pack for after the storm, both for its own sake, and perhaps to learn from it and do better next time.  He writes:

“If being human is to mean anything at all in the Anthropocene, if we are going to refuse to let ourselves sink into the futility of life without memory, then we must not lose our few thousand years of hard-won knowledge accumulated at great cost and against great odds. We must not abandon the memory of the dead.” (Scranton 2015, 109)

In this sense Scranton is like a fifth century Irish monk carefully preserving civilization at the edge of the world, on the precipice of what might be the end of civilization, as well as an Old Testament prophet speaking of an eventual dawn after the dark of night, the calm or chaotic altered world after the tempest.  As with the early Irish monks and similar clerical scribes writing at the height of the Black Death of the 14th century, we do not know whether or not we face the end of the world. (Tuchman 1978, 92-125). 

Although I do not agree with the Anthropocene perspective of surrender and adaptation as long as there is a chance to avoid or mitigate a global disaster, there is much to like about Scranton’s perspective here.

In “We’re Doomed. Now What?” he goes even farther than the idea of the heroic humanist thinker and becomes something like Emerson’s all-perceiving eyeball, a kind of pure empathetic consciousness.  Relying heavily on the perspectivism of Nietzsche, Scranton says that human meaning is a construct.  But meaning must be tied to—must exist in proximity to—perceived reality, and beyond meaning is truth (Tarski 1956 (1983), 155).  Perspectivism is a kind of relativistic but intersubjective triangulation toward a more complete, perhaps objective, picture (Popper; Peirce).  From the accessing of truth, we may devise a more informed and less delusional kind of meaning.

He writes that rather than die with our provincial illusions intact,

“We need to learn to see with not just our Western eyes but with Islamic eyes and Inuit eyes, not just human eyes but with golden-cheeked warbler eyes, Coho salmon eyes, and polar bear eyes and not even with just eyes but with the wild, barely articulate being of clouds and seas and rocks and trees and stars.”

In other words, this is a kind of reverse phenomenology: rather than beginning without assumptions, we should begin with all perspectives.  As sympathetic as I am to all the living things he mentions, beyond a general empathy, to see things through their eyes is an impossibility.  I too feel a kind of pan-empathy, only without the illusion (a Western illusion) that I can truly see things as they do.  And besides, Scranton does not mention what good it would do even if it were possible.  His idea here is reminiscent of Edward O. Wilson’s notion of biophilia. (Wilson 1984)

It seems odd that Scranton believes that technology cannot save us from the climate crisis, and yet empathy and philosophy will save us in a time after.  It may work for individuals—and certainly for thinking people, like historians—but it is not a realistic prescription for an overpopulated world in crisis.  Perhaps he would benefit from some of Gray’s realism about human nature.

Of course, even without hope there are good reasons to act with dignity in the face of inevitable demise.  This is a key tenet of the Hemingway worldview: that in a world without intrinsic meaning, we can still come away with something if we face our fate with courage and dignity.  Nietzsche’s prescription is even better: if we are to live our lives in an eternal sequence of cycles, then we should attempt to conduct our lives in such a way as to make them monuments to ourselves, to eternity, for eternity.  We do this by living in a way that best reflects our noble nature.  Although modern physics has obviously cast doubt on the idea of eternal recurrence, the idea holds up equally well in the block universe of Einstein (and Parmenides and Augustine), in which the past and future exist forever as a continuum in spite of the “stubbornly persistent illusion” of the present moment.  Our lives are our eternal monuments between fixed brackets, even in a dying world, and although Nietzsche and Einstein were both determinists, we must (paradoxically) act as if we have choice.

Camus believes that in a world without deontological values, we assert our own and then try to live up to them knowing that we will fail.  A.J. Ayer inverts this with the idea that life provides its own meaning in a similar sense that our tastes choose us more than we choose them.  If Ayer is right, then perhaps we arrive back at determinism: we have no choice but to immerse ourselves in personal myths as they select us.  We have a will, but it is a part of who we are, and who we are is given.

Of course, one could ask how we are to affirm what makes us distinctively human in a positive sense when what characterizes us as a plague species continues to strangle the biosphere.  What is meaning—aesthetic, intellectual, or otherwise—in a dying world?  Do we withdraw into our myths, our archetypes, as natural-born myth-makers, or has this been part of the problem all along?

To this I would only add what might be called “The Parable of the Dying Beetle.”  When I was a child, I came across a beetle on the sidewalk that had been partially crushed when someone stepped on it.  It was still alive but dying.  I found a berry on a nearby bush and put it in front of the beetle’s mandibles, and it began to eat the fruit.  There may have been no decision—eating something sweet and at hand was presumably something the beetle did as a matter of course.  It made no difference that there was no point in a dying beetle nourishing itself, any more than there was in my offering it the berry to begin with.  It was simply something that the beetle did.  Perhaps it is the same with humans and myth-making: it is what we do, living or dying.  After all, real writers do not write to get published; they write because they are writers.

The Inner Worlds and Outer Abyss of Roy Scranton

We’re Doomed: Now What?, New York: Soho Press, Inc., 2018.

Scranton’s long-awaited new book is a collection of essays, articles, reviews, and editorials.  It begins with a beefed-up version of his New York Times editorial “We’re Doomed. Now What?” [https://opinionator.blogs.nytimes.com/2015/12/21/were-doomed-now-what/], which distills some of the themes of his earlier book, Learning to Die in the Anthropocene.  The new book is organized into four sections.  The first is on the unfolding climate catastrophe.  The second is on his experiences of war, followed by “Violence and Communion” and “Last Thoughts.”  Given that Scranton’s most conspicuous importance is as a writer—as a clear-sighted prophet of the environment—this arrangement makes sense, even though his vision of the future comes from his experience as a combat infantryman and his own sensitive and perceptive nature.

When Scranton limits himself to his own observations and experiences, he is powerful, poetic—the Jeremiah of his generation and possibly the last Cassandra of the Holocene, the world as it was.  He is a writer of true genius and a master storyteller of startling eloquence who writes multilayered prose with finesse and grace.  If there is any flaw, it may be a slight tendency toward overwriting, but this is an insignificant aesthetic consideration.  He also tends to assert more than reveal, but then he is not a novelist.

When he listens to his own muse or discusses other first-person commentators on war, he is magnificent.  When he references great philosophers, he is good—earnest but didactic, his interpretations more conventional.  When he references recent philosophers, especially postmodernists like Derrida, Foucault, and Heidegger, he is only slightly more tolerable than anybody else dropping these names and their shocking ideas (one can only hope that he has read some of Chomsky’s works on scientific language theory, but I digress).  I also take issue with some of his interpretations of Nietzsche, but these are the quibbles of a philosophy minor and the book is mostly outstanding and should be read.                                           

His writing on war is insightful both taken on its own and, chronologically, as a preface to his writing on the environment.  He is not only a keen observer who knows of what he speaks; he is completely fluent in the corpus of experience-based war literature.  If Scranton turns out to be wrong about the terminal nature of the environmental crises, his writing on war will likely endure as an important contribution to the canon in its own right.  In my library, his book will alternate between shelf space dedicated to the environment and a neighborhood that includes Robert Graves, Wilfred Owen, Siegfried Sassoon, Ernst Jünger, Vera Brittain, James Jones, Eugene Sledge, and Paul Fussell.  The essays on war are reason enough to buy the book.  Certainly every Neocon, every Vulcan or humanitarian interventionist whose first solution to geopolitical problems in important regions of the developing world is to drop bombs or send other people’s children into harm’s way should read all of Scranton’s war essays.

There is perhaps one substantial point of contention I have with this book, and I am still not sure how to resolve it, whether to reject my own criticism or to embrace it.  Scranton begins this collection with his powerful “We’re Doomed. Now What?” but ends it with an essay, “Raising a Daughter in a Ruined World,” that appeared in the New York Times around the same time that the new book was released during the summer of 2018.  Regardless of whether or not one agrees with its thesis, there is an uncompromising purity of vision in the earlier book and most of the essays of the new one.  

In the last essay, Scranton writes with his characteristic power, insight, and impossibly good prose.  But then he seems to pull a punch at the end.  Sure, we’re screwed and there is little reason for hope, but here the nature of the doomsday scenario is a little less clear, less definitive: does the near future hold the extinction of our species along with so many others, or just some kind of transformation?  Is the world merely ruined or about to be destroyed?  To be fair, nobody knows how bad things will be beyond the tipping point (and there are parts of his earlier book that also suggest transformation).  If he begins the new book with a knockout hook, he seems to end it with a feint that, while not exactly optimism, is something less than certain death—a vague investment in hope with real consequences.

I get it: kids force compromises and force hope along with worry and his intellectual compromise (tap dance?) may be that there is a glimmer of hope.  Even though the abyss looks into you when you look into it, most of us would blink at least once, even in a world that may (or may not) be dying.

He rightly asks “[w]hy would anyone choose to bring new life into this world?” and then spends part of the essay rationalizing an answer very much in keeping with the theme of the myths of personal meaning he prescribes in Learning to Die in the Anthropocene.  Kids force hope, but who forced, or at least permitted, the child’s existence to begin with?  It is none of my business, except that Scranton is a public commentator who raised the point publicly and then attempted to explain it.  The problem is that the new creature did not ask to be a part of someone’s palliative prescription.  For while there are many shades of realism, one cannot be half a fatalist any more than one can be half a utopian.  Or as a friend of mine observed, “[T]he problem with taking responsibility for bringing a child into the world is that it precludes rational pessimism.”

The more general problem is that this acknowledgment of possible hope forces him from the less compromising position of his earlier book and most of his articles in the new one to a somewhat more conventional and less interesting Anthropocene position—one that admits that the world is ruined (i.e., too far gone to be saved through robust mitigation), and so rather than try to reverse the damage, we must adapt.  In reviewing his previous book, I noted that a fatalistic point of view risks premature surrender, but here my criticism is more with his newfound rationale for solutions than with his all-too-human flinch per se.

Learning to Die in the Anthropocene gives us a basis for a personal approach to the world’s end; in “Raising a Child in a Doomed World” (https://www.nytimes.com/2018/07/16/opinion/climate-change-parenting.html), Scranton states that individual solutions—other than suicide on a mass scale (although one can only wonder what kind of greenhouse gases billions of decomposing corpses would produce)—cannot be part of the solution in terms of fixing the problem.  Even with the possibility of premature surrender, the earlier, more personalized perspective is more interesting than the new one, with its non-forthcoming large-scale prescriptions.  He throws out a few of the solutions common to the young (bottom-up global egalitarianism, global socialism), but has no illusions about their feasibility.

Even here there is honesty; he does not pretend to know how to fix things.  And so (during an August 8, 2018 reading and book signing at Politics & Prose in Washington, D.C.) he lapses into generalities when questioned: “organize locally and aggressively”; perhaps there will be a world socialist revolution (which he openly concedes is utopian, the realm of “fantasy,” yet at another point states “now seems possible”); do less and slow down (although in the last essay he states that personal approaches can’t work); and learn to die (getting back to his previous theme).

A couple of other minor points: the book’s title seems a bit too stark and spot-on for such a serious collection, more in keeping with the placard of the archetypal street-corner prophet of New Yorker cartoons.  Similarly, the cover illustration—the Midtown Manhattan skyline awash behind an angry sea—struck me as a little tabloidesque, but what is it they say about judging a book by its cover?

Jedediah Purdy and the Democratic Anthropocene

After Nature, A Politics for the Anthropocene, Harvard University Press, 2015, 326 pages.   

Another of the more articulate voices under the umbrella of Anthropocene perspectives is Jedediah Purdy, now a professor of law at Columbia University Law School after 15 years at Duke.  Purdy is a prolific writer and this book—now four years old—is by no means his most recent statement on the environment (for an example of his more recent writing, see https://www.nytimes.com/2019/02/14/opinion/green-new-deal-ocasio-cortez-.html).

After Nature is a wonder and a curiosity.  In the first six chapters he provides an intellectual history of nature and the American mind that is nothing short of brilliant.  His writing and effortless erudition are exceptional.  He is a truly impressive scholar.  This part of his book is intellectual history at its best. 

Purdy’s approach is to use the law as a reflection of attitudes toward the natural world.  Through a legal-political lens, he devises the successive historical-intellectual categories of the providential, romantic, utilitarian, and ecological, interpreting nature as the wilderness/the garden, pantheistic god, natural resources, and a living life support system to be tamed, admired, worshiped, managed, and preserved. 

These interpretive frames in turn characterize or “define an era of political action and lawmaking that left its mark on the vast landscapes.”  On page 27, he states that these visions are both successive and cumulative, that “[t]hey all coexist in those landscapes, in political constituencies, and laws, and in the factious identities of environmental politics and everyday life.”  He acknowledges that all of these perspectives exist in his own sensibilities.  In my experience, one is unlikely to come across better fluency, depth of understanding, and quality of writing on this topic anywhere, and one is tempted to call it a masterwork of its kind.

It is therefore all the more surprising that after such penetrating analysis, historical insight, and eloquence in describing trends of the past, his prescription for addressing the environmental problems of the present and future would go so hard off the rails into a tangle of unclear writing and a morass of generalities and unrealistic remedies.  It also strikes one as odd that such a powerful and liberal-minded commentator would embrace his particular spin on the Anthropocene perspective, given some of its implications.

In Chapter 7, “Environmental Law in the Anthropocene,” Purdy introduces some interesting, if not completely original, ideas like “uncanniness”—the interface with other sentient animals without ever knowing the mystery of what lies behind it, of what they feel and think.  Before this, he discusses something he calls the “environmental imagination”—an amalgam of power (“material”) interests and values.  After this he ventures into more problematic territory in the sub-chapter “Climate Change: From Failure to New Standards of Success.”

Purdy rejects the claims of unnamed others that climate change can be “solved” or “prevented” (these are his cautionary quotation marks, although it is unclear whom he is quoting).  He writes about the “implicit ideas” of unidentified “scholars and commentators” (my quotation marks around his ideas) and their “predictable response” of geo-engineering to rapidly mounting atmospheric carbon levels (“a catch-all term for technologies that do not reduce emissions but instead directly adjust global warming”).  Again, I am not sure to whom he is referring here.  Most people I know who follow environmental issues favor a variety of approaches, including the reduction of carbon production.

According to Purdy, this perspective begins with “pessimism” and the observation that “we are rationally incapable of collective self-restraint.”  This is reasonable enough, and Purdy recognizes that spontaneous self-restraint on a global scale has not been forthcoming.  Indeed, it is hard to imagine how such collective action would manifest itself on so massive a scale short of a conspicuous crisis of a magnitude that would likely signal the catastrophic end of things as we know them (e.g. if we woke up one day and most of the coastal cities of the world were under a foot of water).  If this kind of awareness of a crisis were possible at a point where it was not too late to mitigate it, it could only be harnessed through the top-down efforts of states acting in concert.

With self-restraint not materializing, the “pessimism” of the environmental straw man switches to “hubris.”  And both of these descriptive nouns then “take comfort” (just like actual people or groups of people in a debate) in an either/or conclusion “that if we fail to ‘prevent’ climate change or ‘save’ the planet from it then all bets are off; we have failed, the game is up.”  This threat of failure and apocalypse then results in the “next step” of the “try anything now!” attitude of geo-engineering.

From here he concludes that “[b]oth attitudes manage to avoid the thought [idea] that collective self-restraint should be a part of our response, perhaps including refraining from geo-engineering: the pessimism avoids that thought by demonstrating, or assuming, that self-restraint would be irrational and therefore must be impossible; and the hubris avoids it by announcing that self-restraint has failed (as it had to fail ‘rationally’ speaking), it was unnecessary all along anyway.”

Purdy then “propose[s] a different way of looking at it” and calmly announces that “climate change, so far, has outrun the human capacity for self-restraint” [so, the attitude of “hubris” is right then?], that it is too late to save nature as it was (“climate change has begun to overwhelm the very idea that there is a ‘nature’ to be preserved”), and that we should learn to adapt.  In the next paragraph, he states “[w]e need new standards for shaping, managing, and living well.  Familiar standards of environmental failure will not reliably serve anymore” [does he mean metrics of temperature, atmospheric and ocean chemistry, and loss of habitat/biodiversity?].  “We should ask, of efforts to address climate change, not just whether they are likely to ‘succeed’ at solving the problem, but whether they are promising experiments—workable approaches to valuing a world that we have everywhere changed.”

For a moment then, there is a glimmer that Purdy might be on to something by embracing a Popper-like outlook of experimentation and piecemeal problem solving/engineering.  The question is how to implement an approach of bold experimentation. 

My own view is that, on balance, the environmentalists of recent decades have been clear-sighted in their observations and that their “pessimism” is warranted.  As with Malthus and the inexorable tables of population growth, I would contend that they are right except perhaps for their timetable.  Is the dying-off of the world’s reefs and the collapse of amphibian and now insect populations all just the pessimism and hubris of fatalistic imaginings?

How then should we proceed?  Even with the implosion of the End of History narrative, Purdy, like so many of his generation and the younger Millennials, seems to have a child’s faith in the curative powers of democracy.  His concurrence with the Nobel laureate Amartya Sen’s famous observation that famine has never visited a democracy appears to be as much of an uncritical Fukuyama-esque cliché as the assertion that democracies do not fight each other (malnutrition on an impressive scale has in fact occurred in Bangladesh and in the Indian states of Orissa and Rajasthan—i.e. regions within a democratic system).

Purdy then asserts a kind of democratic or good globalization in contrast to the predatory, neoliberal variety that he rightfully identifies as a leading accelerant of the global environmental catastrophe.  He writes that “[p]olitics will determine the shape of the Anthropocene.”  Perhaps, but what does “democracy” mean to the millions living on trash heaps in the poorer nations of the world?  What does it mean in places like Burma, the Congo, and Libya?

A savant of intellectual history, Purdy seems to know everything about the law and political history as a reflection of American sensibilities.  But politics and the law (like economics and the military) are avenues and manifestations of power—even when generous and high-minded, the law is about power—and one is left wondering if Purdy knows how power really works.  

In the tradition of Karl Popper’s The Open Society and Its Enemies, I would contend that the primary benefits of democracy (meaning the representative democracy of liberal republics) are practical, almost consequentialist in nature, rather than moral.  First, it is an effective means of removing bad or ineffective leaders and a means of promoting “reform without violence;”27  Second, it should ideally provide a choice in which a voter can discern a clearly preferable option given their interests, outlook, and understanding.

The idea of a benevolent democratic genre of globalization and a “democratic Anthropocene” is reminiscent of academic Marxians of a few decades ago who waited for the “real” or “true” Marxism to kick in somewhere in the world while either shrugging off its real-world manifestations in the Soviet Union and the Eastern Bloc, China, Cuba, and North Korea as false examples, corrupt excrescences, or else acknowledging them as hijacked monstrosities.

Whether in support of Marxism or democracy, this kind of ideological stance allows those who wield such arguments to immunize or insulate their position from criticism rather than constructively welcoming it, inviting it.  It could be argued that the concept of egalitarian democratic or socialistic globalization is to the current generation what Marxist socialism was to American idealists of a century ago.  In the early twentieth century, the majority of Americans had the realism and good sense not to accept the eschatological vision and prescriptions of the earlier trend.  As numerous writers have noted, populism is just as likely to take on a reactive character as it is a high-minded progressive ideology.  As economist Robert Kuttner and others have observed, some of the European nations whose elections were won by populist candidates can be described as “illiberal democracies.” [See Robert Kuttner, Can Democracy Survive Global Capitalism?, 267].

The fact that some of the most brilliant young commentators on the environment, like Purdy and perhaps Scranton (even with his admission that global socialism is possibly utopian)—to say nothing of veteran commentators on the political scene, like Chris Hedges (America: The Farewell Tour)—embrace such shockingly unrealistic approaches leaves one with a sense of despair over the proposed solutions as great as that over the crises themselves.  It is like pulling a ripcord after jumping out of an aircraft only to find that one’s parachute has been replaced with laundry.

To be fair, nobody has a solution.  Edward O. Wilson has lamented that humans have not evolved to the point where we can see the people of the world as a single community.  Even such a world-historical intellect as Albert Einstein advocated a single world government. [See Albert Einstein, “Atomic War or Peace,” in Out of My Later Years, 185-199].  If the proliferation of nuclear weapons and the possibility of the violent destruction of the world could not force global unity as a reality, what chance do the environmental crises have?  By the end of 1945, everybody believed that the atomic bomb existed, while today, powerful interests continue to deny the reality of the climate crises.  As George Kennan observed, the world will never be ruled by a single regime (even the possibility that it will be ruled entirely under one kind of system seems highly unlikely).  Unfortunately, he will probably be right.

Purdy rightfully despises the neoliberal Anthropocene wrought by economic globalization.  But perhaps this is the true nature of globalization: aggressive, expansionistic, greed-driven, blind to or uncaring of its own excesses, and de facto imperialistic in character.  William T. Sherman famously observes that “[w]ar is cruelty, and you cannot refine it.”28  So it is with globalization, whether it be mercantilist, imperialist, neoliberal, or some untested new variety.

Globalization is economic imperialism and it likely cannot be reformed.  The whole point of off-shoring industry and labor arbitrage is to make as big a profit as possible by spending as little money as possible in countries with no tax burdens and few, if any, labor and environmental laws, and people willing to work for almost nothing.  Globalization is the exploitation of new markets to minimize costs and maximize profits.  While the purpose of an economy under a social democratic model is to provide as much employment as possible, neoliberal globalization seeks a system of efficiency that streamlines the upward flow of wealth from the wage slaves to the one percent.

It is conceivable that someday in the distant future the world will fall into an interlinked global order based on naturalistic economic production regions and import-replacing cities, as described by Jane Jacobs.  But that day, if it ever comes, is both far off and increasingly unlikely, and there exists no roadmap of how to get there.29  Certainly a sustainable, steady-state world might have to be more egalitarian than it is today as a part of fundamentally re-conceptualizing the human relationship with nature.  But this too is a long way down the road and would have to be imposed by changing circumstances forced by the environment.  We need solutions now, and the clock is ticking.

For the short term—for the initial steps in a long journey—the best we can hope for is modest and tenuous cooperation among sovereign states to address the big issues facing us: a shotgun marriage forced by circumstances, by intolerable alternatives (an historical analogy might be the U.S.-Soviet alliance in the Second World War, and the effort will have to be like a World War II mobilization, only on a vastly larger scale).  We will need states to enforce change locally, and international agreements will have to establish what the laws will be.  The problem here is the internal social and political divisions within states that are unlikely to be resolved.  Moreover, immediate local interests will always take priority over what will likely be seen as abstract worldwide issues.  In order to prevent such internal dissent and tribalism, an ideal world order, building on Jacobs’ idea, would have to consist of small regional states that are demographically homogenous (another idea of David Isenbergh).

Purdy rightfully disdains the disparities of neoliberal globalization but only offers an ill-defined program in which “the fates of rich and poor, rulers and ruled” would be tied (presumably the ruling classes would allow the ruled to vote away their power).  The idea here is that famine is not the result of scarcity but rather of distribution.

If such control and reconfiguration is already possible, then why have even more modest remedies failed to date?  Why not put in place the sensible prescriptions of the environmentalists who embody the “pessimism” and “hubris”?  Why stop there?  Why not banish war and bring forth a workers’ paradise?  Why not Edward O. Wilson’s Half-Earth goal (see below)?

As regards the practicalities of democratic globalization, Purdy’s prescriptions also seem to ignore some inconvenient historical facts.  For instance, as many commentators have observed, the larger and more diverse a population becomes, the less governable it becomes, and certainly the less democratic, as individual identities and rights are subordinated to the group.  The idea of a progressive social democracy with a very large and diverse population seems unlikely to the point of being a nonstarter.30

Democracy works best on a local level where people are intimately acquainted with the issues and how they affect their interests—the New England town hall meeting being the archetype for local democracy in this country.  Similarly, the most successful democratic nations have tended to be small countries with small and homogenous populations.  Trying to generalize this model to a burgeoning and increasingly desperate world any time soon is a pipedream.

Ultimately, the problem with the prescription of universal democracy in a technical sense is that democracies, like economies, are naturalistic historical features and are not a-contextual constructs to be cut out and laid down like carpet where and when they are needed.  Democracy must grow from within a cultural/historical framework.  It cannot effectively be imposed any more than can a healthy economy.  As Justice Holmes observes in a letter to Harold Laski, “[o]ne can change institution by a fiat but populations only by slow degrees and I don’t believe in millennia.”

Purdy also seems to conflate democracy with an ethos of liberalism.  Democracy is a form of government by majority rule, where liberalism is an outlook based on certain sensibilities.  If a fundamentalist Islamic nation gives its people the franchise—or if a majority of people in an established republic adopt an ideology of far-right populism—they will likely not vote for candidates who espouse liberal values.  Transplanted world democracy and the redistribution of wealth are not likely to work even if the means to implement them existed.

As for the democratic Anthropocene—or any kind of Anthropocene world order—I think that John Gray gets it mostly right, that things will never get that far.  In order to understand the impracticality of this idea, we might consider a simple thought experiment in which we substitute another animal for ourselves.  It is difficult to imagine a living world reduced to a monoculture of a single species of ant or termite, for instance.31  And while humans, like ants (e.g. leafcutters), may utilize the various resources of a robust environment of which we are but a small subset, it is difficult to imagine nature surviving as a self-supporting system in a reduced state as the symbiotic garden (Gray’s “green desert”) along the periphery of an ant monoculture.  And so we ask: if not ants, then why humans?

In terms of Boolean logic, the reduction of nature to a kept garden—and I am not saying that Purdy goes this far—appears to be an attempt to put a larger category (nature) inside of a smaller one (human civilization), the equivalent of attempting to draw a Venn diagram with a larger circle inside of a smaller one.

Beyond the lack of realism there is also an unrealized immorality to the more extreme Anthropocene points of view.  Letting nature and the possibility of its salvation be lost is a kind of abdication that is not only monumentally arrogant but also ethically monstrous, and on a scale far greater than historical categories like slavery or even the worst instances of genocide.  One can only wonder if adherents to the Anthropocene perspectives realize the implications of their prescriptions.

We now know that the living world is far more conscious, thinking, feeling, and interconnected than we ever before suspected.32  Even the individual cells of our bodies appear to possess a Lamarckian-like interactive intelligence of their own, and we can only begin to guess at the complexities of the overlapping systems of the world biosphere.33  There is no possible way we can know the implications of the lost interrelation of whole strata and ecosystems.  To think that we can manage a vastly reduced portion of the living world to suit our needs is as unethical as it is impractical.

To give up and say that the world is already wrecked is not the same thing as saying that some abstract or hypothetical set or singular category will be lost, but rather that a large part of the sentient world will be destroyed because of us.  To put it more bluntly, how can allowing nature to be destroyed—meaning the extinction of perhaps a million or more species and trillions of individual organisms—without attempting the largest possible effort to prevent it, be any less of an atrocity than the Holocaust or slavery?  In an objective biological calculus of biodiversity, it will be many fold worse, even if the ecological decline occurs over a period of lulling gradual change, of terraces of change and plateaus, and human adaptation.  A child who has never seen a snowy winter day, a snowy egret, or a snow leopard will not miss them any more than a child today misses a Carolina parakeet or Labrador duck.  At worst they will experience a vague sadness for something they never knew, assuming they are even taught about such lost things.

I mention this (and again, I am not saying that Purdy advocates such a position) because I would like to think that those who subscribe to the Anthropocene perspectives would have willingly fought in WWII, especially if they had been aware of the atrocities of the Nazis and Imperial Japanese.  And yet in a mere two sentences, the author seems to decree an unspecified portion of the living and sentient world to be permanently lost:

“As greenhouse-gas levels rise and the earth’s systems shift, climate change has begun to overwhelm the idea that there is a “nature” to be saved or preserved.  If success means keeping things as they are, we have already failed, probably irrevocably.”

No “nature to be preserved”?  What could this possibly mean?  Could the author mean it literally, that the living world (to include humans) is lost?  Could he mean “nature” as metaphor (whatever that means)?  As a defunct concept or “construct” of the kind that postmodernists love to contend as half of a false dichotomy?  Are environments like rainforests and reefs metaphors and human constructs?  Since this is a work of nonfiction, I will take him at his literal word, but readily concede that I might be misunderstanding this and other points of his.

And the solution:

“We need new standards for shaping, managing, and living well in a transformed world.”

“Living well,” huh?  What could this mean in a world soon to have 8 billion mouths to feed (Scranton, by contrast, tells us that we must learn to die well)?  How is this not anthropocentrism?  Observe the logic here: when the alternatives are likely failure and unlikely success, don’t even try to correct the problem or fix your style of play; simply change the standards and hope for the best.  Move the goalposts to suit the game you intend to play.  When reality becomes unacceptable, just diminish your expectations and change the parameters of the discussion.  When the Wehrmacht overruns Poland, France, and the Low Countries, just write off these areas as newly acquired German provinces and then do business with the new overlords.  After all, solutions have not been forthcoming to date.  He is right that things look bleak for the world, but then things looked pretty bleak in 1939 and 1940.

My sense is that beyond the brilliance and kindly nature, there is a desperation behind the outlook.  In his book Purdy asserts the stern banality that “nature will never love us as we love it” as if that were somehow related to the issue, as if to chastise naïve tree huggers with the fact that their embrace is unrequited.  But one gets the sense that he might just as easily be chiding a younger, Thoreau-like Jed Purdy over a lost love that never loved him back.  If an intelligent realization of the amorality of nature has forced him to relinquish the mistaken idea of a beloved and loving nature, perhaps he cannot let go of the universalist ideals of liberal democracy, even above the survival of much of the natural world itself.  A person must believe in something, and it is easier to accept the death of something that never loved us in return.  If we do not hold on to something, what then remains of belief, youthful optimism, and hope for the future beyond youth?  Of course the desire for something to be true has no bearing on its actual truth.

What Purdy offers is a liberal humanist “riposte” to the undeniable biological logic of the post-humanist progressives who would extend rights to the non-human world.  Purdy makes an impressive case to preserve liberal humanism, a wholesome human tie to the land, and the dignity (if not actual rights) of animals.   

As intellectual history, After Nature is impressive, and besides minor infractions against the language no more serious than a modest penchant for words like “paradigmatic,” much of it is remarkably well-written.  But ultimately the importance of a book is found in the power of its ideas—its insights—rather than in the power of its presentation.  For all of its brilliance, After Nature ends up embracing hopeful speculative generalities that one may infer to be intended as superior and ahead of the pack, while seeming to write off much of the living world.  In his prescriptions he is provincial in his generational ideas—ideas full of historical analysis but shorn of real historically-based policy judgment, ideas which by his own admission will not preserve nature, which he deems a defunct concept and reality.

A great analyst may fail as a practical policy planner, and the stark contrast of this book as legal and political history relative to its prescriptions suggests that this is the case.  Just because you are smart doesn’t mean you are sensible in every case, and just because you write well doesn’t mean you are right.  Great eloquence runs the risk of self-seduction along with the seduction of others; many legal cases are won by the persuasion of presentation rather than by the proximity of the claims of the winning argument to the truth of the matter.  Purdy clearly knows history, but in my opinion, he does not apply his remarkable interpretation of the past toward a realistic end.  As with some lawyers-turned-historians I have known, he seems to overestimate the power and influence of the law and political form (e.g. it was not the Confiscation Acts nor, strictly speaking, the Emancipation Proclamation that destroyed slavery, but rather the Union Army; where the law is not enforced, the law ceases to exist as a practical matter), to include those of “democracy,” on the course of human events.

Purdy does not face the human fate that Scranton characterizes in Learning How to Die in the Anthropocene.  This is understandable.  What is standalone brilliance and ambition in a dying world?  If Scranton is sensitive and intelligent, then Purdy is too. Perhaps more so, and presumably he has not seen Iraq.

The Grand Old Man of Biology and His Half-Earth

Half-Earth: Our Planet’s Fight for Life, New York: W.W. Norton & Company, 2016, 259 pages, $25.95 (hardcover)

The human species is, in a word, an environmental hazard.  It is possible that intelligence in the wrong kind of species was foreordained to be a fatal combination for the biosphere.  Perhaps a law of evolution is that intelligence extinguishes itself.

-Edward O. Wilson

This admittedly dour scenario is based on what can be termed the juggernaut theory of human nature, which holds that people are programmed by their genetic heritage to be so selfish that a sense of global responsibility will come too late.

-Edward O. Wilson

Darwin’s dice have rolled poorly for Earth.

-Edward O. Wilson

In contrast to the three authors I have discussed so far, Edward O. Wilson is an actual scientist.  As one might expect, he is non-judgmental but equally damning in his measured observations of the devastation wrought by our kind.34  He is genial and understanding of human flaws, fears, and the will to believe, but retains few illusions, and in some ways his analysis is as dire as Gray’s (Wilson coined the term Eremozoic/Eremocene, the “Era of Solitude”—which he prefers to Anthropocene).35  Unlike the others, Wilson tells us what we must do to save the planet.  He does not tell us how.

What sets him apart from the others is that he is a world-class biologist, the world authority on ants, and one of the founders of modern sociobiology.  He is intimately acquainted with the problem and has an understanding of how natural systems work that is both broad and deep.  As regards his writing, he is gentle—a good sport by temperament—and has sympathy with people and the human condition with all of its quirks and many faults.  It is striking that this gentleness does not diminish or water down his observations.

Wilson has written a great deal—including nine books after the age of 80—and has apparently changed his mind on some important issues over the years.  He believes that humans cannot act beyond the natural imperatives that shaped us as creatures, but he does believe that we can learn and change our minds.  It is therefore noteworthy and not a little ironic that John Gray believes that our behavior is inevitable, yet one senses a tone of judgment, while Wilson believes that we may have a choice in what we are doing, and yet is forgiving, even sympathetically coaxing.

In his 2016 book, Half-Earth, Wilson offers as a solution—a goal rather than a means of achieving it—a thesis with the same hemispheric name: in order to save the biosphere, it is necessary to preserve as much of the world’s biodiversity as possible.  To do this, he believes that we must preserve half of the world’s land surface as undisturbed, self-directed habitat.

In a book note in the March 6, 2016 edition of The New Republic titled “A Wild Way to Save the Planet” [https://newrepublic.com/article/130791/wild-way-save-planet], Professor Purdy reviewed Wilson’s book with some prescience and little charity.  Purdy raises some interesting points and is correct that Wilson does not offer a practical step-by-step program or a roadmap toward this goal.  He is also right that Wilson is not at his best when speculating on the natural adaptive purpose of the free market or on population projections and that he betrays a certain political naïveté, but then his importance is not as a social engineer or a practitioner of practical politics. He is a leader of the biodiversity movement and a foundation dedicated to this bears his name.  He is also a Cassandra with the most impressive of credentials relative to his topic.  In terms of contributions and historical reputation, Wilson, who will be 90 next month (June 2019), is the most distinguished of the five commentators discussed here.

In his analysis, and after a grudging if mostly accurate overview of Wilson’s positions and accomplishments,36 Purdy seems to miss the significance of Wilson’s book as a poetic (as opposed to purely analytical) thesis: if we want to save the planet and ourselves, we must preserve the world’s biodiversity and the unfathomable complexity of symbiosis and interconnection of the living world.  If we want to save Nature thus construed, we must dedicate about half of the planet to just leaving it alone (indeed, a plausible argument can be made that, other than setting aside wild areas, the more humankind meddles with nature—even with good intentions—the more harm we do).

Although niches of individual species lost may be quickly filled in an otherwise rich environment, we cannot begin to imagine the implications of the structural damage we do to the overall ecosphere through wholesale destruction of habitat and species.  There may be impossibly complex, butterfly-effect-like ripples leading to unforeseen ends.  Damage to the environment is often disproportionate to what we think it might be.37  Nor should we concede that the natural world is hopelessly lost already (in stating that “[i]f success means keeping things as they are, we have already failed, probably irrevocably,” Purdy reveals himself to be darker than the “pessimists” who still seek mitigation), and that the goal of some writers on the Anthropocene may be little more than managing what remains of nature.  In contrast, Wilson is not making a “wild” suggestion.  He is telling us what we must do to save the biosphere and ourselves with it.  In this assessment I believe he is correct.

Wilson sees the Anthropocene outlooks and their monocultural goal as pernicious anthropocentrism—a Trojan horse of human arrogance cloaked in the language of stern environmental realism.  He believes that they prescribe a greatly reduced human-nature symbiosis with humans as the senior partner.  Purdy dismisses this in a few clipped assertions with a confidence that underlies so much of his analyses here and elsewhere.  But Wilson’s experience with both the Nature Conservancy and in the academy, and statements by the people he cites, bear out his beliefs (to be fair, there are degrees of the Anthropocene perspective ranging from the comparatively mild to the extreme).

Regardless, Purdy does not speak for all Anthropocene points of view—more extreme adherents do in fact couch their positions in terms of a stark and dismissive pseudo-realism that is arrogant.  Purdy seems to concede the danger of “a naturalized version of post-natural human mastery” in his own book (pp. 45-46).  As for the prescriptions of the Anthropocene perspective Wilson criticizes in Half-Earth, it would seem that they are no more realistic than those of a cancer patient who acknowledges his disease but not its terminal nature, or else realizes its seriousness and then adopts a cure that will allow the disease to kill him.  Purdy asserts that Wilson’s goal is itself a reflection of just another Anthropocene outlook.

Does Wilson’s book posit an Anthropocene thesis?  Adherents to the Anthropocene define it variously as the state of affairs where nature has been irreparably damaged or altered by the activities of mankind, and as the dominant species we are thrust into the position of dealing with it one way or another.

Purdy characterizes the Anthropocene as a current that “is marked by increased human interference and diminished human control, all at once, setting free or amplifying destructive forces that put us in the position of destructive apprentices without a master sorcerer.  In this respect, the Anthropocene is not exactly an achievement; it is more nearly a condition that has fallen clattering around our heads.”38 

This is fair enough.  But what concerns our analysis of Wilson’s outlook (or the term we use to describe it) is not so much the acknowledgement of the Anthropocene as a fact or a state of affairs as whether or not his view is an Anthropocene perspective like the ones he criticizes in Half-Earth, and with which Purdy at least in part concurs in After Nature (i.e. one that has accepted the ruin of the biosphere and which prescribes adaptation over mitigation).

Lawyers quibble over definitions far more than do scientists.  The sides of a good faith critical discussion should agree on terms and proceed from there. Although I find questions over definitions to be inherently uninteresting and unimportant distractions (outside of the law and similar activities), since Purdy makes the claim that Wilson’s Half-Earth thesis is an Anthropocene argument by another name, we might briefly examine if it is.39  

Is the Half-Earth hypothesis an Anthropocene argument?  I think the answer is “no.”  First of all, Wilson admits that the problem is real, that the biomass of the human species is more than 100 times that of any large animal that has ever lived.  But he also believes that the vast majority of species that comprise the current biodiversity of the world can still be preserved (i.e. the Eremocene/Anthropocene is where we are heading, but we are not yet there in any final sense).  This can be done by preserving half of the planet as habitat.  This is not a prescription for a human monoculture with a diminished natural periphery or greenbelt, but the opposite: an accommodation of the natural world as a thing apart from us, a steady-state, hands-off stewardship while curbing our own excesses.  It is mitigation.

My sense is that Wilson’s perspective is sound: the natural world as a “self-willed and self-directed” prior category, deserving of protection by us as remote stewards capable of preservation or destruction.  The biggest part of this protection would be simply leaving nature alone, rather than treating it as a subset to be managed (an adjunct category), or as a thing permanently wrecked to be tolerated and adapted to (as we adapt it to us) insofar as it meets or does not interfere with our needs.40

But even if Wilson’s admission of the human impact on the biosphere, and a set of policies to preserve half of it, technically render his argument an Anthropocene perspective, there is still a substantive difference: the difference between attempting to manage nature and leaving a large portion of it alone.  It is the difference between adapting to (and even cultivating) unfolding wreckage, and mitigation through noninterference.

In this sense, Wilson’s Half-Earth is not so much an Anthropocene thesis as an attempt to preclude a human monoculture by setting aside half of the planet through a policy of noninterference rather than involved management.  Taking him at his word, I am inclined to say that Wilson seeks to avoid the Eremocene by preserving diversity, rather than adopting an Anthropocene perspective that declares nature to be dead and aspires to somehow live well among the wreckage.

Purdy correctly writes off Wilson’s view of economic growth as “a naturalized logic of history” and calls it “technocratic” (“technocrat”/“technocratic”/“technocracy” are variations of a favorite smear among the post-Boomer generations, although the word appears to have multiple related but different definitions, one being “a specialized public servant.”  I wonder if they would lump the men and women who implemented the New Deal, the U.S. industrial mobilization during WWII, and the Marshall Plan into this category).  When reading the review, I got the feeling that Wilson’s powerful sociobiological arguments rankle Purdy’s strong attraction to democratic theory and related philosophy based on human exceptionality.

Ironically, Purdy admonishes the author for providing no blueprint for implementing the half-planet model, yet offers nothing stronger than generalities about global democracy.  He also writes “[a]lthough Wilson aims for the vantage of the universe—who else today calls a book The Meaning of Human Existence?—the strengths and limitations of his standpoint are those of a mind formed in the twentieth-century.”  One could just as reasonably ask: who else today calls a book After Nature, regardless of whether “nature” is intended as metaphor, an outdated concept or construct, the living world and physical universe as things-in-themselves, or some or all of the above?

Likewise, the bit about “the mind formed in the twentieth-century” suggests a tone of generational chauvinism, a latter-day echo of “[t]he torch has been passed to a new generation…” perhaps.  He dismisses Wilson’s love of nature and his general outlook as parochial to the twentieth-century United States—an odd claim to make against the world authority on ants, the man who coined the term biodiversity, the standard-bearer of sociobiology, and a man who was bitten by a rattlesnake as a youth.

The larger implication of Purdy’s dismissal of Wilson as a well-meaning but ultimately avuncular old provincial is itself a kind of local snobbery and presentism—the apparent assumption that anyone from an older generation is insufficiently evolved or sophisticated in his thinking to embrace the eschatological utopian clichés and bubbles of a later generation (Purdy was born in 1974, and is therefore no less a product of the twentieth century than Wilson).  As such, Wilson becomes a representative of just another misled perspective to be weighed against cutting-edge sensibilities, found wanting, and waved away in spite of a modestly good effort at the end of an impressive career.

I would venture that Wilson knows both nature and history better than Purdy in terms of experience: he lived through the Great Depression, which was also the period of the regional ecological disaster called the Dust Bowl, and was a teenager during the Second World War.  These are hardly events likely to instill an excessively benevolent or uncritical view of nature or human nature.  Purdy may be right about the devastation wrought by neoliberal globalization, but I believe he is wrong about Wilson and his goal.  Both men concede the necessity of reconfiguring the human relationship with the planet.  Wilson calls for a “New Enlightenment” and a sensibility he calls “biophilia” [regarding the latter, see The Future of Life, 134-141, 200].  Purdy dismisses Wilson’s feelings toward nature as just more unrequited love.  And yet Wilson’s biophilia does not seem incongruent with Purdy’s own “new sensibilities.”

When reading Purdy’s review of Wilson’s book, I was reminded of a story about an earlier legal prodigy, Oliver Wendell Holmes, Jr., who, as a senior at Harvard, presented his Uncle Waldo with an essay criticizing Plato.  Emerson’s taciturn reply: “I have read your piece.  When you strike at a king, you must kill him.”41  In spite of some good observations about weak points in Wilson’s outlook (especially in areas outside of his expertise), Purdy’s review didn’t lay a glove on the great scientist or his general prescription.

Where Purdy is right is in the failure of human self-restraint to materialize on a scale that could save the planet.  Decades of dire warnings from environmentalists have failed to arouse the world to action.  It seems unlikely that Wilson’s prescription will be any more successful.  What is required is drastic, top-down action by the nations of the world.  I will discuss this in a later post.

My reading of Wilson is that the Half-Earth goal is what needs to be done in order to save the world’s biodiversity to include humankind as a small categorical subset.  He leaves the messy and inconvenient details to others.  Wilson and his idea are very much alive and if we wish to remain so, we must take it to heart.  As a person schooled in realism, I have long believed that if necessary measures are rendered impracticable under the existing power or social structure, then it is the structure and not the remedy that is unrealistic.  But the prescription has to be possible to begin with.  Let this be the cautionary admonition of this essay.      

My sense is that Wilson is right, but that his prescription is unlikely to be realized.  In my next post, I will offer what I believe could be a general outline to save the planet from environmental catastrophe.  

Adam Frank and the Biosphere Interpretation: the Anthropocene in Wide-Angle

Adam Frank, Light of the Stars: Alien Worlds and the Fate of the Earth, New York: W.W. Norton & Company, 2018, 229 pages.

Disclaimer: I am currently still reading this book (Frank gave an admirable summary of his ideas in an interview with Chris Hedges on the program On Contact).

Any book with endorsements by Sean Carroll, Martin Rees, and Lee Smolin on the dust jacket is likely to catch the attention of those of us who dabble in cosmology.  Adam Frank’s book is not about cosmological speculation or extrapolations of theoretical physics.  It is about the environment in the broadest of contexts.  It characterizes two distinct but overlapping worlds that ultimately merge.  The first is a view of life in a cosmic sense and the other is about life and civilization in a human context and scale.

On the first point, Frank sees the Anthropocene as just another transition: humans may be causing mass extinctions, but as mammals we are equally the product of a mass extinction (the extinction of the dinosaurs allowed mammals to come to the fore).  Hey, these things happen, and some good may come out of them—we did.  Life will go on even if we don’t, and if we ruin the world as we knew it, relax—nature will deal with it after we are gone and will create something altogether new out of the wreckage.  The Anthropocene may be bad for us—and many of our contemporary species—but we are simply “pushing” nature “into a new era” in which Earth will formulate new experiments (as all life, individual creatures, species, and periods of natural history are experiments).  We are just another experiment ourselves, quite possibly a failed one (and, if we really screw things up, the Earth might end up as a lifeless hulk like Mars or Venus).

This larger amoral picture—although undoubtedly true—seems ironic coming from someone as affable and glib as Frank.  But the wide-angle gets even wider.  When talking in astronomical terms, it is inevitable that any thinking reed will be dazzled by the numbers and characterizations of the dimensions of the night sky, of our own galaxy, and of the uncounted billions of other galaxies scattered across the observable universe.  In this respect, Frank (like any astronomer or astrophysicist throwing out numbers about the cosmos) does not disappoint.  If he had left things there, I would conclude that he is likely right, but that no thinking, feeling being could surrender to such fatalism without a fight.  After all, nature makes no moral suppositions, but moral creatures do.  But he does not stop there.

Over the expanse of our galaxy (to say nothing of the observable universe), it is likely that life is common, or at least that it exists in numerous places among the planets orbiting countless trillions of stars in the hundreds of billions of galaxies.  It may be that humans are rendering the Holocene a failed phase of the experiment, precisely because it produced us.  But life will likely persist in some form regardless of how things turn out here.
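The force of this kind of argument is arithmetical, and a back-of-the-envelope, Drake-style product conveys it.  The numbers below are illustrative assumptions of my own, not figures from Frank’s book:

```python
# A Drake-style, order-of-magnitude estimate of how many "experiments"
# in life and civilization the observable universe may have run.
# Every value here is an illustrative assumption, not a measured quantity.
N_GALAXIES = 2e11        # rough count of galaxies in the observable universe
STARS_PER_GALAXY = 1e11  # rough average number of stars per galaxy
F_WITH_PLANETS = 0.5     # assumed fraction of stars with planetary systems
F_HABITABLE = 0.02       # assumed fraction of those with a habitable world

candidate_worlds = (N_GALAXIES * STARS_PER_GALAXY
                    * F_WITH_PLANETS * F_HABITABLE)
print(f"candidate worlds: {candidate_worlds:.1e}")

# Even if life and civilization arise on only one candidate world in a
# billion, hundreds of billions of "teenage" civilizations remain.
civilizations = candidate_worlds * 1e-9
print(f"civilizations:    {civilizations:.1e}")
```

With these inputs the product is about 2 × 10²⁰ candidate worlds, and even a one-in-a-billion chance of civilization leaves about 2 × 10¹¹ of them.  The point is not the particular numbers, which can be argued endlessly, but how hard it is to choose inputs small enough to drive the answer to zero.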

Where Frank transitions from the very large to the merely human, he synthesizes the amorality of Gray with the mythmaking of Scranton toward an end perhaps along the lines of Wilson.  Unlike Gray, he adopts no tone of judgment or chastisement.  On the contrary, he believes that the whole good-versus-bad placing of blame of the various “we suck” perspectives should be avoided: our nature absolves us from judgment; we are just doing what any intelligent (if immature) animal would do in our situation.

Frank analogizes humankind to a teenager: an intelligent, if inexperienced, self-centered, willful being who assumes that its problems are uniquely its own and therefore have never been experienced by anyone else before.  Frank assumes that the sheer number of planets in our corner of the Milky Way suggests that there are plenty of other “teenagers” in the neighborhood, some of whom have died of their folly and their inability to change their ways.  Others may have learned and adapted.  As for us, we need to grow up, change our attitude, and learn to sing a new and more mature song.  Frank sees the human capacity for narrative as the way out, except that, unlike Scranton, he believes new myths to be our potential salvation rather than just a way to die with meaning.

In an interesting parallel to Frank’s view of humans as cosmic teenagers, Wilson characterizes us and our civilization in the following terms: “We have created Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused by the mere fact of our existence, and the danger to ourselves and the rest of life.” [See Ch. 1 “The Human Condition” in Wilson’s The Social Conquest of Earth, p. 7].  So how are we supposed to grow up?

According to Frank, in order to reach a steady-state level of human life on the planet, we need new myths about what is happening in order to drive “new evolutionary behavior.”  We need narratives that will not only allow nature to proceed (a la Wilson), but which actually enhance nature: narratives that make a vibrant biosphere even more productive.  The new narratives would provide “a sense of meaning against the universe.”  They will be a way out.  On this point he is like Wilson in his attempt to merge the arts and sciences to address the problems and embrace an all-loving biophilia.

As with Purdy, Scranton, and Wilson, Frank believes that a global egalitarianism would be necessary to achieve a steady state.  Once again the problem is how to do it.  How do we generate these narratives in a world where some powerful leaders do not concede that there is even a problem?  If the threat of nuclear annihilation and the urging of a world-historical intellect like Albert Einstein after the bloodiest war in all human history did not push humankind even an inch toward merging into a single egalitarian tribe, one must wonder if anything can (and the history of the past century shows that when you redistribute wealth, you only standardize misery).  In 1946 everybody believed that the atom bomb existed, while today there are powerful interests and world leaders who still deny the reality of human-caused climate change.  Human beings would have to completely reconfigure their relationship with nature and with each other, and do it in the immediate future.  Could this be done even at the gunpoint of environmental catastrophe?  How would a candidate in a democratic system in a wealthy nation pitch such a transformation to the electorate?  Again: how do we get there?  As they say Down East, you can’t get there from here.

Similarly, Frank’s analogy of humankind to a self-absorbed teenager is suggestive, but is the comparison supposed to fit into the context of a lifecycle that is historical or natural-historical (i.e. is he talking about an adolescent in the context of human civilization as a phenomenon of 9,000 years, or of a species that is 200,000 years old)?  If his idea is that our species has an outlook that is adolescent in terms of evolutionary development, then it seems unlikely that we can grow up quickly enough to become a bona fide adult, and that the necessary maturity to turn things around will not occur in the timeframe in which the environmental crises will unfold.  Wilson talks in similar terms in at least one of his books: that we must start thinking maturely as a species, un-tethered from old theistic myths and tribalism.  And yet the current state of affairs suggests that we are as far from that point as ever, that such tribalistic tendencies as ethnic nationalism and fundamentalist religion are as strong as ever.  The human nature analogized by Frank and Wilson is not just a sticking point to be overcome or a hurdle to be jumped, but a central fact of our animal nature that currently appears to be insurmountable.

One small issue I have with the book is that the existence of life and civilizations on other planets is at this point purely conjectural.  The dazzling numbers Frank presents plausibly suggest that life may be fairly common—indeed, the numbers make it seem almost ridiculous to think otherwise.  But, if I recall my critical rationalist philosophy correctly, it is impossible to falsify a probability claim, and at this point such a claim rests on speculative probability rather than actual observation or corroboration.  He also discusses a conjectural “great filter”—the idea that intelligent life kills itself off (if its maturity lags far behind its intelligence).  Another pregnant conjecture.

What I liked especially was his description of James Lovelock and Lynn Margulis’s Gaia hypothesis: that life is an active “player” in the planetary environment, able to keep the atmosphere oxygen-rich by preventing oxygen from combining into compounds, and thus averting an oxygen-free “dead chemical equilibrium” like the atmospheres of Mars and Venus.  The biosphere therefore acts as a regulator, keeping oxygen at a near-optimal 21% of the atmospheric mix (it was not clear to me how severe periods such as ice ages fit into the “regulation” of the environment).  This regulated balance is called a “steady state” (Lovelock analogizes this to the way the body of a warm-blooded organism regulates its temperature).  Lovelock intended to call this idea the “Self-regulating Earth System Theory,” but at the urging of William “Lord of the Flies” Golding, settled instead on the more poetic “Gaia.”

With an interest “in the question of atmospheric oxygen and its microbial origin,” Lynn Margulis, the former wife of Carl Sagan, teamed up with Lovelock in 1970.  As Frank notes, “[w]here Lovelock brought the top-down perspective of physics and chemistry, Margulis brought the essential bottom-up view of microbial life in all its plenitude and power” [p. 125].  Frank observes that “[t]he essence of Gaia theory, as elaborated in papers by Lovelock and Margulis, lies [in] the concept of feedback that we first encountered in considering the greenhouse effect” [p. 125] and that “Lovelock and Margulis were offering a scientific narrative whose ties to the scale of world-building myth were explicit” [p. 127].  As an observation statement characterizing a “self-regulating planetary system” (an observable phenomenon), the Gaia hypothesis would seem to be something close to a scientific organon, supported by Lovelock’s ingenious “Daisyworld” thought experiment; whether or not the biosphere is a singular living entity that will eliminate humans as a pathogen would still seem to be a metaphysical assertion.
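The feedback at the heart of Daisyworld can be sketched in a few dozen lines.  The sketch below is a minimal reconstruction of the Watson-Lovelock model with its standard parameter values; it is my own illustration, not code from Frank’s book.  Black daisies absorb sunlight and warm the planet, white daisies reflect it and cool the planet, and the competition between them holds the surface temperature near the daisies’ growth optimum across a wide range of solar luminosities:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0         # solar flux constant from Watson & Lovelock (1983)
Q = 2.06e9        # heat-redistribution coefficient (in T^4 space)
GAMMA = 0.3       # daisy death rate
T_OPT = 295.5     # optimal growth temperature, about 22.5 C
ALBEDO = {"bare": 0.5, "white": 0.75, "black": 0.25}

def growth(temp_k):
    """Parabolic growth response; zero outside roughly 278-313 K."""
    return max(1.0 - 0.003265 * (T_OPT - temp_k) ** 2, 0.0)

def equilibrium(lum, a_white=0.2, a_black=0.2, steps=4000, dt=0.05):
    """Integrate daisy areas to steady state at solar luminosity `lum`;
    return (planetary temperature in K, white area, black area)."""
    for _ in range(steps):
        bare = max(1.0 - a_white - a_black, 0.0)
        albedo = (bare * ALBEDO["bare"] + a_white * ALBEDO["white"]
                  + a_black * ALBEDO["black"])
        te4 = S * lum * (1.0 - albedo) / SIGMA       # planetary T^4
        # Local temperatures: dark patches run warmer, light ones cooler.
        t_white = (Q * (albedo - ALBEDO["white"]) + te4) ** 0.25
        t_black = (Q * (albedo - ALBEDO["black"]) + te4) ** 0.25
        # Each daisy type spreads into bare ground at its growth rate
        # and dies at a constant rate.
        a_white += dt * a_white * (bare * growth(t_white) - GAMMA)
        a_black += dt * a_black * (bare * growth(t_black) - GAMMA)
        a_white = max(a_white, 0.001)   # a small seed stock always survives
        a_black = max(a_black, 0.001)
    return te4 ** 0.25, a_white, a_black

# At 20% higher luminosity a lifeless planet would sit near 314 K,
# but the daisies hold the surface close to their optimum.
temp, white, black = equilibrium(1.2)
bare_temp = (S * 1.2 * (1.0 - ALBEDO["bare"]) / SIGMA) ** 0.25
print(f"regulated: {temp:.1f} K   lifeless: {bare_temp:.1f} K")
```

Run across a ramp of luminosities, the daisy-covered planet stays within a few degrees of the 295.5 K optimum long after the lifeless planet would have overheated: the “self-regulating planetary system” in miniature, with no foresight or purpose anywhere in the loop.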

Unfortunately, this is as far as I have read in his book.

Conclusion 

Building on Frank’s example of humanity as an experiment flirting with failure, a friend of mine suggested comparing the individual human being in a time of collapse to an individual cancer cell.  Imagine that such a cell was somehow conscious and could reflect on its complicity in killing a person.  It might express regret yet philosophically conclude, “but what can I do?  I am a cancer cell.”  So it is with people and their kind.  Is this a denial of agency or a facing of facts?  Is it an admission that human beings—neither good nor bad in the broad focus of nature (although objectively out of balance with their environment)—are like cancer cells killing a person regardless of personal moral inclinations?  We are just the latest imbalance—like the asteroid (or whatever it was) that killed off the dinosaurs, and the other causes of the other great extinctions of the Earth’s natural history.  And so we arrive back at John Gray and biological destiny.

But even if we are cancer cells or merely a rapacious primate, I don’t accept such a fate—again, Nietzsche’s Will.  We are also a “thinking reed.”  Even if there is no free will, there is still a will with an ability to learn from mistakes and experience—we must act as if there is free will.  Gray’s outlook might be a true position, and yet no person as an ethical agent can morally abide by it.  We are audacious monkeys and have to answer two questions: can we rise above our biology through reason and moderation and solve the seemingly insurmountable problems resulting from our own nature, and will we?  I believe that the answer to the first is a cautious “yes.”  The answer to the second question, however, may well render the first an academic point.

Consider the following historical thought experiment, also suggested to me by David Isenbergh: Imagine that you could return to the late Western Roman Empire a few decades before it collapsed.  You see all of the imbalances, injustice, and misery of that period.  You identify yourself as a traveler from the future and tell the people you meet (you can, obviously, speak fifth-century Latin) that if they and their civilization do not reform their ways, there will be an apocalyptic collapse resulting in 500 years of even greater darkness and misery.  Suppose too that you were even able to get this message to the powers that be.  Do you think you would be listened to, or would you be treated as mad as events continued unaltered on their way to disaster?  As I have noted elsewhere, in a world of the blind, a clear-sighted man would not be treated as a king, but rather as a lunatic or heretic, and would likely be burned at the stake if caught.

In Malthusian terms, we are a global plague species.  In geological/astronomical terms, we are just the latest phenomenon to fundamentally alter and test the resilience of life on Earth.  But even if these observations are true, we are also moral beings, and to embrace them as inevitable, and to recommend a posture of adaptation and wishful thinking that the planet will not deteriorate as far as the chemical equilibria of Mars and Venus, is the equivalent of justifying WWII by pointing to the postwar successes of Germany, Japan, and Israel (as regards the former two, one could make the observation that sanity followed psychosis).

At the end of the review of Scranton’s Learning to Die in the Anthropocene, I asked: what is meaning in a dying world?  I will add only this: if the human story is coming to a close, then there is one great if austere luxury of being a part of this time that is as interesting as it is unsettling.  As individuals, we never know the full story of our lives until the very end (if even then).  If the end of progressive civilization is upon us in a matter of decades, then we have a greater and fuller understanding of the overall human project than any people at any time in history.  Rather than narratives of progress or decline, agrarian or democratic myths, historicist cycles or eschatology heading toward a terrestrial or providential endgame of history with salvation at the end, we may come to learn that history was just the progress of a plague species toward its own destruction by the means of its extended phenotype that we call civilization.

Finally, one of the things I have taken away from these six books and from my own discussions on the topic is that there are two powerful generational disconnects at play.  The first is common among older people (say, over 80), who have little or no idea of the scale of the problems facing us—that modernity, civilization, and their species generally are already failed projects—but who have a certain understanding of history.

The other disconnect is among young people, who are far more in touch with ecological issues, who see the problems for what they are, and whose various diagnoses and potential remedies are at least on the scale of the problems, but whose prescriptions are unrealistic to the point of utopian absurdity.  On this point, Purdy and Scranton are anomalies who know history as well as anybody, but who seem to take after others of their generation (and the subsequent generation) in being unable to apply its lessons.  Frank and Wilson know natural history and yet also speak of a global egalitarian regime.  To be fair, nobody has an answer, and even the one I find to be most realistic, when walked through step by step, ends up being something akin to utopian itself.

Several times I have analogized the crises of the environment to the early phases of WWII.  The current situation is unlikely to unfold as quickly as that conflict did, and it is difficult to know the point in the conflict at which we find ourselves by analogy.  It is unclear whether we are at the point in history analogous to the doomed conference at Versailles, the Japanese invasion of Manchuria, the occupation of the Rhineland, the Spanish Civil War, the Czechoslovakian crisis, the invasion of Poland or France, Operation Barbarossa, or the attack on Pearl Harbor.  As I noted in the introduction, it is also unclear when we will cross a point of no return.  Are we to be Churchills and Roosevelts, or are we to surrender to our fate?

There is a difference however. The solutions that brought WWII to a successful conclusion for the Allies were devised and implemented within an existing context of human life that was more or less the same after the war, that of modern industrial civilization. If we are to successfully address the environmental crises, we will have to fundamentally reconfigure, reorient the human relationship with the biosphere. Rather than an array of robust imaginative domestic measures to fix large but essentially conventional problems (economic depression, a global total war), the solutions now needed would be akin to forcing an entirely new phase of human civilization, like the shifts from hunter-gatherer life to agricultural (and urban) life, or agrarian civilization to industrial.

Sometime later this year or early next year, I hope to post another insufferably long discourse on how we might chance to turn things around.

Notes

  1. William Strunk, Jr. and E.B. White, The Elements of Style, New York: Macmillan Publishing Co., Inc., 3rd ed., 1979, pp. 71-72, 80.
  2. For the Eremocene or “Age of Loneliness,” see Edward O. Wilson, Half-Earth, Our Planet’s Fight for Life, New York: W.W. Norton & Company, 2016, p. 20.  For the Anthropocene, or “Epoch of Man,” see p. 9.
  3. David Archer, The Long Thaw, Princeton University Press, 2009, p. 1.
  4. On political disputes disguised as scientific debates see Leonard Susskind, The Black Hole War, Boston: Little Brown and Company, 2008, 445-446.
  5. Roy Scranton, Learning to Die in the Anthropocene, San Francisco: City Lights Books, 2015, p. 14.
  6. Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  7. See generally Edward O. Wilson, The Future of Life, New York: Alfred A. Knopf, 2002.
  8. Alasdair Wilkins, “The Last Mammoths Died Out Just 3,600 Years Ago, But They Should Have Survived,” March 25, 2012.
  9. Gray attributes this term to Edward O. Wilson’s Consilience, New York: Alfred A. Knopf, 1998.  Apparently Wilson also denies “that humans are exempt from the processes that govern the lives of all other animals.”  Wilson uses the similar term Eremocene in Half-Earth, p. 20.
  10. Edward O. Wilson, The Future of Life, p. 29. 
  11. See Karl Popper’s essay “A Realist View of Logic, Physics, and History” in Objective Knowledge, Oxford: Clarendon, 1979 (revised ed.), 285.
  12. George Kennan, Around the Cragged Hill, New York: W.W. Norton & Company, 1993, p. 142.
  13. Regarding the regulation of gases in the atmosphere, see Lynn Margulis, Symbiotic Planet, 113-128.
  14. On stable equilibria and tipping points, see generally Per Bak, How Nature Works, Springer-Verlag New York, Inc., 2006.
  15. For Gray’s perplexing views of consciousness and artificial intelligence, see Straw Dogs, pp. 187-189.  We do not even know what consciousness is.  It is therefore remarkable that Gray can assert that machines “will do more than become conscious. They will become spiritual beings, whose inner life is no more limited by conscious thought than ours.”  Leaving aside weasel words like “spiritual,” it seems likely that if machines ever do become conscious, it will be the result of an uncontrolled emergent process (the way that consciousness arose as a natural phenomenon), and not the product of technological progress along the current lines of algorithms and hardware.  Consciousness appears to be the result of the physical (biological/electrochemical) processes of the brain.  As anyone who has known someone with a brain injury, mental illness, or Alzheimer’s disease knows, to the degree that the brain is damaged, diseased, or otherwise diminished, the mind diminishes correspondingly, if unpredictably.  And yet like all phenomena emerging from more primal categories, the mind is not fully reducible to physical processes.  The objections to the reduction of consciousness to “mechanical principles” made by Leibniz in his Monadology are as alive and well today as they were in 1714.  See G.W. Leibniz’s Monadology, An Edition for Students, University of Pittsburgh Press, 1991, Section 17, pp. 19, 83-87.
  16. For Gray’s prescription for the human predicament, see Straw Dogs, pp. 197-199.  His idea of “the true objects of contemplation” and his “aim of life as simply to see” are sensible if austere goals toward greater intellectual and psychological honesty, and are reminiscent of Nietzsche’s idea of “forgetfulness” expounded in Section 1 of his “On the Uses and Disadvantages of History for Life.”  But where Nietzsche advocates animal forgetfulness to allow people the freedom to act forthrightly and without inhibitions, Gray believes that action only makes contemplation possible and that the real goal is understanding without myths, false self-awareness, and the illusion of meaning.  See Untimely Meditations, Cambridge University Press, R. J. Hollingdale, trans., 1983 [1874], pp. 60-67.  As regards the environmental crises, Nietzsche’s prescription would allow for action (although action without historical memory would seem to be a recipe for catastrophe as a basis for policy), where Gray would allow for a dispelling of illusions from which others might derive meaningful action even if Gray does not believe it is possible.  His idea also has a curious, if inverse, relationship to that of Roy Scranton in Learning to Die in the Anthropocene.
  17. See Holmes’s letter to Lewis Einstein dated May 19, 1927, in The Holmes-Einstein Letters (New York: St. Martin’s Press, 1964), 264-268.  On this point, Holmes is quoting Clarence Day from his book This Simian World.
  18. Charles Dickens, A Christmas Carol.
  19. Malthus speaks of the leveling of population to match resources, p. 61.
  20. By “closed” I mean deterministic.  See generally, Karl Popper, The Open Universe, London: Routledge, 1982. In a closed universe, all events are determined and may perhaps exist in the future if time as characterized by Einstein’s block universe model is correct.  As Popper observes, in a closed universe, every event must be determined where “if at least one (future) event is not predetermined, determinism is to be rejected, and indeterminism is true” (p. 6).  In a closed universe there is chaos (deterministic disorder), and in an open universe there is randomness (objective disorder), and therefore the possibility of novelty and freedom.
  21. This analogy was suggested to me by David Isenbergh.
  22. See Chapter 10, “Fecundity,” in Pilgrim at Tinker Creek, New York: HarperCollins, 1974, pp. 161-183.
  23. For instance, see his reply to Jedediah Purdy in the January 11, 2016 number of Boston Review.
  24. See generally, Robert D. Kaplan, The Coming Anarchy, Shattering the Dreams of the Post Cold War, New York: Random House, 2000.
  25. For instance, see Thomas Cahill’s popular history How the Irish Saved Civilization, and Barbara Tuchman’s chapter “‘This Is the End of the World’: The Black Death” in A Distant Mirror.
  26. Wilson, The Future of Life, 27.
  27. The Open Society and Its Enemies,  Princeton University Press, 2013 [1945], p. xliv.
  28. William Tecumseh Sherman, letter to James M. Calhoun, et al.  September 12, 1864.  Sherman’s Civil War, Selected Correspondences of William T. Sherman, 1860-1865, Brooks D. Simpson and Jean D. Berlin, eds., Chapel Hill: University of North Carolina Press, 1999, pp. 707-709.
  29. According to Jane Jacobs, healthy economies arise through naturalistic growth based on the natural and human resources of a region and on import-shifting cities.  This cannot be forced or created as part of a top-down plan (unless it is simply to rebuild existing systems, as with the Marshall Plan after WWII).  See generally Jane Jacobs, Cities and the Wealth of Nations, New York: Random House, 1984.  The idea of correcting economic imbalances through structural remedies would probably make bad situations even worse.  My reading of historical events like the Russian Revolution and the period following the Chinese Civil War is that attempts to redistribute wealth only standardize misery outside of the rising clique, the new elites.  As David Isenbergh observes, power concentrates, and when it does, the new elites tend to act as badly as the old ones.  This is one reason why Marxism—although insightful in its historical observations—fails utterly in its prescriptions.
  30. As the late Tony Judt observes, “[t]here may be something inherently selfish about the social service state of the mid-20th century. Blessed with the good fortune of ethnic homogeneity and a small, educated population where almost everyone could recognize themselves in everyone else.” See Ill Fares the Land, New York: The Penguin Press, 2010.
  31. The analogy of a world dominated by ants or termites was suggested to me by David Isenbergh.
  32. See Carl Safina, Beyond Words: What Animals Think and Feel, New York: Henry Holt and Company, 2015, and Bernd Heinrich, Mind of the Raven, New York: HarperCollins, 1999. See also Frans de Waal, Are We Smart Enough to Know How Smart Animals Are?, New York: W.W. Norton & Company, 2016, and Mama’s Last Hug, New York: W.W. Norton & Company, 2019.
  1. On cellular intelligence, see James Shapiro, Evolution: A View from the 21st Century, Saddle River, NJ: FT Press, 2011. On symbiosis, see Lynn Margulis, Symbiotic Planet, New York: Basic Books, 1999. See also Elizabeth Kolbert, The Sixth Extinction, New York: Henry Holt and Company, 2014, and Field Notes from a Catastrophe, New York: Bloomsbury, 2006 (2015).
  2. For instance, see generally Edward O. Wilson’s The Future of Life, New York: Alfred A. Knopf, 2002, pp. 22-41.
  3.  See note 2.   
  4. For instance, Purdy states that “Wilson is in the minority of evolutionary theorists in arguing that human evolution is split between two levels of selection: individual selection, which favors selfish genes and groups.” I have not polled evolutionary scientists about whether or not they accept multi-level selection, but it is safe to say that it is not the radical idea of an apostate minority. Although not embraced by “selfish gene” ultra-Darwinists, multi-level selection is a widely accepted idea among evolutionary biologists sometimes called “naturalist” Darwinists (see generally Niles Eldredge, Reinventing Darwin, 1997; see also Stephen Jay Gould, The Structure of Evolutionary Theory, 2002). Multi-level selection was first speculated on by Darwin himself and finds its origins in The Descent of Man, 1871, p. 166: “It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over other men of the same tribe, yet that an increase in the number of well-endowed men and the advancement in the standard of morality will certainly give an immense advantage to one tribe over another.”
  5. There are formulas to predict the loss of biodiversity relative to the loss of habitat; species numbers decrease by smaller fractions than habitat area. See Edward O. Wilson, Half-Earth.
  6. Boston Review, January 11, 2016.
  7. On the unimportance of definitions in critical discussions, see Karl Popper, Objective Knowledge, pp. 58, 309-311, 328.
  8. See Wilson, Half-Earth, pp. 77-78. In response to a question on this point during a discussion and book signing on November 16, 2016, David Biello gave a similar interpretation of Wilson’s perspective. Biello’s book is The Unnatural World, New York: Scribner, 2016.
  9. Mark DeWolfe Howe, Justice Holmes: The Shaping Years, 1841-1870, Cambridge: Belknap Press, 1957, 154.

The Four Categories of The Establishment

By Michael F. Duggan

In this posting, I would like to propose an integrated way of thinking about political and policy leadership and advisement in terms of categories defined by personality type as well as by role and function.  Although I do not subscribe to the fallacy of psychologism—reducing a person’s ideas to their mental state instead of taking the concepts on their merits—I do believe that personality plays a role in the ideas one chooses and therefore in one’s policy outlook.  I do not know whether anyone has suggested a similar model, but I am not aware of one.

Rather than examining policy outlooks on a conventional ideological spectrum from left to right (although these categories certainly fit into my scheme), perhaps we should look at them in terms of how categories of policy outlooks exist in relative proximity to each other on a scale from moderate to severe, by categories of temperament/personality/imagination, and by type in terms of approach/function in implementing policy.  Some categories are ideologically neutral and take on the doctrinal coloration of their milieu.  Because of this, my model has elements of both a scale and a spectrum.  The idea is to look at these things in terms not entirely reducible to ideology (which the model treats only as a single factor or intensifier), but rather in terms of how they function in the real world in regard to competing individuals and their policy positions.

In policy, as in business, these categories of leaders and advisors are Conventionalists, The Establishment (and Establishment Types), Mavericks, and Rogues. These categories are seldom found in unalloyed form, and they may overlap, influence, build upon, and cross-pollinate with each other, even in a single person.  There are also multitudes of followers who break down along these same lines.  This is not a completely fleshed-out idea, but one that I am just throwing out in nascent form.  Per usual I wrote this very quickly, so please forgive any mechanical errors.

Conventionalists
Conventionalists are men and women who subordinate their views to the perspective a la mode and who regard allegiance to these outlooks as necessary in order to advance themselves. These operators act with an eye to the powers that be who promote and embody the dominant ideology of the time.

The Conventionalists are often careerists and credentialists, even though credentials are seen as value-neutral instruments necessary to get ahead.  In periods of sensible policy outlook, these people can be constructive in that they reinforce positive trends by their numbers, if not by a strong commitment to the good ideas.  They blow with the wind.

Beyond self-interest, the perspective of the Conventionalist is often (at least publicly) non-ideological in a negative sense (realism may also be non-ideological, but has been constructive in its commitment to practical goals and in its result-oriented flexibility).  The Conventionalist point of view tends toward moral neutrality or petty, functional psychopathy and the amoral sensibility that whatever advances one’s career is by definition good, regardless of the ethical and practical consequences.  Such people will adhere to a failed policy as long as it continues to be the dominant outlook or until they adumbrate its failure and the outlook that will succeed it.  The driving forces in this type are the ego, vanity, and the power drive.

Today the Conventionalist embraces and reinforces the orthodoxy of the Washington Consensus, the outlook of the DNC and RNC and “The Blob” of the U.S. foreign policy Establishment. This ideology subscribes to neoconservatism/neoliberalism, economic globalization, a domestic economy founded on Big Finance and an ever-growing split between high-end and low-end services, and U.S. military hegemony and the industries related to it.  By virtue of the dominance of this outlook in the upper reaches of the government, it has been the controlling view of the Establishment in recent years.  As Andrew Bacevich and others have observed, you will not get anywhere in government today if you do not swear allegiance to this “deeply pernicious collective naivete” (see America’s War for the Greater Middle East, 363).  An Establishment characterized by lock-brain conformity to shared assumptions drives the dominance of conventionalism at all levels of policy.  Individuals of this type should not be confused with the lower-level career government servants who are the backbone of the Federal Government and tend to avoid the political intrigues of successive administrations.

We should note that a good (i.e. loyal or compliant/cooperative) subordinate may be a genuine protégé, or he/she may be an earnest believer in a different outlook biding his/her time (e.g. the “good soldier,” the conservative William Howard Taft, during the more progressive administration of Theodore Roosevelt).  On a less positive note, he or she may equally be an opportunistic true believer playing the part of the sycophant and waiting for his or her moment.  One of the most common things in Washington, D.C. is the true believer boss cultivating true believer underlings.

The Establishment and Establishment Types
The Establishment is the governing mean, the formal and informal structural context in which all of these types exist and operate.  It is a median (and medium) of people and outlook. It is in principle value-neutral, but it always takes on the character and ideology of the people in it (today this is the neoliberal outlook; in the late 1940s it was dominated by moderate realists who were increasingly replaced by hardliners).  It is the generalized governmental temperament of a period, an aggregate of multiple perspectives into a status quo in which strong-minded individuals may divide the policy community into camps—into a majority as well as influential plurality and minority outlooks.  The dominant of these is the official view of the government, although historically, there have often been balancing and countervailing currents.

An Establishment representing the outlook of an administration may avail itself of Mavericks (see below) and take on the character of their ideas (e.g. the New Deal, the Marshall Plan).  As a thing-in-being, there is always an Establishment of strong players in the system, and it seems counterintuitive to have an Establishment without a dominant view.  As with nature, a policy environment hates a vacuum, and a strong personality or coalition will tip an unstable equilibrium in one direction or another.  On a related note, the best presidents are always at the heart of their administrations, and therefore determine or heavily influence the direction of the Establishment of their times. There are always balancing elements, resistance, and cross currents from other bastions of power and estates of the sovereign whole or aggregate.

Because the Establishment takes on the character of the dominant perspective (which can be top-down), it is altogether possible to have a Maverick or even a Rogue Establishment.  The most constructive periods of the American Establishment are those that utilize constructive/innovative ideas of Mavericks (as with the New Deal—Roosevelt was both a Maverick and Establishment Type who listened to and employed the energies of many Maverick public servants).  In terms of historical context, it is tempting—at least for me—to measure the Establishment of prior and successive periods by the baselines of the social democratic domestic Establishment of 1933-1970 (or thereabout), and the foreign policy and military Establishment of 1939-1949 (or thereabout). 

Leaders and the Establishments they head vary with the policy context and situational dictates of the time.  A sensitive leader intuits what political approach is called for and then attempts to meet those needs in terms of leadership, management, and policy/goals.  Historically, there have been Bringers of Order (Charlemagne, Alfred the Great, and other notable leaders of the late Dark Ages who allowed for the conditions for the comparative order of the Later Middle Ages), Caretakers/Preservers of the Status Quo (most of the U.S. presidents between Lincoln and Theodore Roosevelt), Conservative Reformers (Grover Cleveland and the early Theodore Roosevelt), Progressives (President and Bull Moose candidate Theodore Roosevelt, Woodrow Wilson—the latter in an economic, if not social justice, sense), and Transformers (the Founders/Framers taken as a whole, Lincoln, Franklin Roosevelt).  There is also an often corrupt category (e.g. urban political machines based on ethnicity and identity) that picks up the slack when the official governmental structure is insufficient or is not doing its job. This latter category, although often unsavory, is just as often constructive.

The Establishment Type
There is a distinction to be made between the Establishment Type and the Establishment.  The Establishment Type tends to be temperamentally conservative, but the best are innovators and readily embrace and utilize Mavericks, their ideas, and their prescriptions (e.g. George C. Marshall as Secretary of State with Kennan as the Director of the Office of Policy Planning).  They differ from Conventionalists in that they put the system and its well-being above themselves and their ambitions, and in the fact that they seek to do what is right in a broader sense than mere careerism.

The best Establishment Types employ the creativity of Mavericks, and manage and contain Rogues.  In bad times, Establishment Types balance and stabilize.  Under good leadership they are also a positive element. They subordinate their careers to duty and service.  Under effective leadership, they tend to rise on a basis of merit rather than credentials. The best of this sort would include the New Deal Cabinet and the Wise Men of the 1940s such as Charles Bohlen, Averell Harriman, Harry Hopkins, Robert Lovett, George Marshall, and John McCloy.

Mavericks
Mavericks are the idea men and women—intellectuals—and may be practical or impractical (or even utopian), constructive or pernicious.  The best of these are Cassandras and Jeremiahs who rely not on theories so much as insight and may design doctrines of their own; the worst are true believers touting rigid ideology and dogma.  The former are the intuitive creative types who see things before others do, and more accurately, and are able to plan effectively accordingly.  More generally defined, Mavericks can be vigorous and influential intellectuals of any ideological stripe.  In some instances they may embody the cutting edge of the zeitgeist, but may come to be regarded as ambiguous or even harmful in a larger historical context and in retrospect (e.g. the navalist historian and policy theorist Alfred Thayer Mahan in driving imperialism and the pre-World War One naval arms race).

Mavericks are weighed in terms of the effectiveness of their policy prescriptions. In an administrative sense, Mavericks are measured in the degree of their influence as well as their distance from the previous status quo of the Establishment and the centrality of their role in creating a new one.  This is why a moderate realist like George Kennan, who had studied history and knew what had worked in the past, what had not, and why, is as much a Maverick as the first Neoconservatives, who were true believers in a theoretical ideology with questionable historical antecedents.  Kennan’s influence contributed to a moderate, if short-lived, realist Establishment that was quickly supplanted by more ideological Mavericks like Acheson, Nitze, and Dulles.

The best Mavericks are insightful creative types who “think outside-of-the-box” (to use an inside-the-box cliché) and devise imaginative policy solutions.  The worst are true believers or else cynics implementing the desires of powerful interests both inside and outside of government.

The “Good” Maverick
“Good” Mavericks tend to be high-minded realists who see each new situation with fresh eyes and without assumptions other than a broad and deep base of intimate and formal historical knowledge.  Some are outsiders who made it on merit (Hamilton, Kennan).  This type of advisor may seem inconsistent to unimaginative Conventionalists and to bad or “Malignant” Mavericks when they (the Good Mavericks) prescribe different responses to superficially similar situations that are fundamentally dissimilar, or when an idea or approach did not produce favorable results when first used. The Good Maverick eschews ideology, groupthink, and over-reliance on theories and simple formulas.  Historically, they have often been a special kind of outsider who succeeded on a basis of merit and insight. To work effectively, this type must be allowed space for creativity and a free hand (as with Kennan in the Office of Policy Planning, and Kelly Johnson in his Lockheed “Skunk Works”). We live in a time that despises constructive Mavericks in policy.

Given the policy types I have already mentioned, it is noteworthy that in my scheme, Mavericks shake things up, where Establishment Types tend to embrace order and the status quo but may be open to new ideas.  It is possible for the dominant strata of an Establishment to be composed of Good Mavericks co-mingled with Establishment Types (e.g. Harriman, Kennan, Lovett, and McCloy during the immediate post-WWII era) or else true believers (e.g. John Hay, Henry Cabot Lodge, Alfred Thayer Mahan, Theodore Roosevelt, and Elihu Root, during the age of American imperialism).

It is notable that great leaders, although often difficult to categorize or analyze in terms of systems and general reductions, must have qualities of the Maverick along with the balance, leadership, and management skills to direct the Establishment and lead the electorate.

The “Malignant” Maverick
These are the influential ideologues or true believers in theories who are able to influence leaders and colleagues, and influence policy and the nature/direction of the Establishment. They may do this with native charisma, force of personality, and the skills of departmental and political infighting.  They typically have a showy, if narrow and superficially impressive intellect that may dazzle and persuade. In extreme form they may become Rogues.  We live in a time in which this kind of Maverick has set the keynote for the Establishment.

Rogues
Rogues are the self-interested adventurers, the authoritarian lovers of power for its own sake and for gratification of the ego, the borderline or bona fide sociopathic businessman/woman, plutocrat, or military leader.  Rogues are a more extreme hybrid of the Careerist and the Maverick and may appear to be the latter (or, rather, individuals of the latter category, unchecked may morph into actual Rogues).  Where Mavericks may be either understated or charismatic, Rogues tend to be predominantly charismatic and may be powerful demagogues.  Very often they are populist juggernauts or else infighters who have figured out how to dominate within (and beyond) the rules of the system.

These are people who may reach a position where they can defy the Establishment unless and until they are somehow checked, or else may come to dominate it.  They can be useful in time of war as a military type if pointed toward an enemy and then kept on a short leash by a strong and well-established system (it is less clear what to do with them when the war is over).  Regardless of whether they are in business, the military, politics, or policy, they must never be allowed to take over or dominate.

Individuals can begin as Rogue insurgents and end up as Conventionalist Establishment types living off of reputations of bringers of change.

Conclusion
There you have it.  This is by no means a comprehensive list of “types” found in the Establishment: there are also Apostates—disillusioned true believers, idealists, and utopians who may go on to become strong critics of their former programs. There are Whistle-Blowers, a hugely important category that is even more universally despised these days than the Good Maverick. Most obviously, as a functional category, there are Principals—presidents, senators, representatives, cabinet members, department heads, and other high-level appointees.

Finally, there is also a functional category or type that I call the Hidden Hand or the Opaque Player (both working titles). These are quiet, omnipresent high-level advisors of the inner circle who may be team players or self-interested individuals (this may be the type that Henry Adams characterizes as “masters of the game for the sake of the game,” but may equally be loyal and dedicated public officials). In some cases their true beliefs and motives are unknown outside of their immediate circle and sometimes are not fully known even there. Some of this kind never show their ideological hand publicly, and their views may only be inferred by looking at the leaders they handle. They may be great public servants, true believers, or low-key, high-level adventurers or even careerists. They are the “hidden hands” of administrations.

Regardless of motives, the Opaque Players are typically the “smartest kid in the room” (and in the Establishment generally) and may be a handler of a president or else a henchman or a behind-the-scenes whip or button pusher on his/her behalf. They may be the real “power behind the power,” and were and are sometimes women. In our system, they are often lawyers. They know how to “work the system” and get things done and may be more responsible for implementing a program or agenda than the president him/herself. They may be a Chief of Staff or a personal/unofficial advisor of the highest level in the executive. This is the type who has the ear of the leader—has continual access—and in most administrations, is one of the few who is able and positioned to speak the unvarnished truth to his/her boss. They are able to deliver bad news to the president and offer immediate advice. Not elected, they may be the most powerful people in the government in a practical sense and under a weak leader may be a de facto chief executive.

Early Modern examples of this type may include Thomas Cromwell and Cardinals Richelieu and Wolsey. In our own tradition, Elihu Root may be an example of this type. Power is fluid in a robust system, and this type may be far broader and less apparent than suggested by this definition. As with successful conspiracies, we may never know who the greatest Opaque Players of history were. There is also a lower-level version of this kind that may act as a personal emissary, lobbyist, or representative of the president, a person who speaks with the approved authority of his/her boss (Thomas “Tommy the Cork” Corcoran might be an example).

I am not sure whether this scheme holds any water or if I have even interpreted my own ideas correctly or applied them accurately in terms of analyzing historical leaders and advisors (below).  It is still a very nascent work in progress and I just wanted to get it out there for the consideration of others.  Again, I wrote this very quickly, so please excuse any creative grammar/mechanical mistakes.

Addendum, November 30, 2020: The Most Dangerous Type: The Hyper-Competent True Believer
Like all extremists, True Believers differ from one another in the details of their beliefs (fascists and Marxists are both tribalists). Members of this type are not mere careerists, although they are oftentimes the most successful in their field. Unlike simple careerists, they are driven by unquestioned belief and an unexamined certainty in that belief. During periods when their outlook is out of season, they linger in communities of the like-minded. They are not personally corrupt; they seek to implement a program or policies favorable to their beliefs. They are quick to dismantle existing structures, traditions, and precedent that stand in their way, and are therefore not traditional conservatives, but a genus of radical, even when their outlooks are on the right. They may implement what they regard to be traditional views through activist, radical means. They are not in it for personal gain, but rather for actualizing a personal vision in the public sphere, although they will quickly and opportunistically exploit a corrupt leader or regime to further their cause.

Typically they are smart in a focused, often technical sense. Many are at the top of their class in their chosen field, such as the law. They are narrowly brilliant, but may believe in simplistic or naive religious or utopian outlooks. Because of their conspicuous brilliance, they attract young acolytes who regard them to be ingenious, legendary. They inspire fierce loyalty in proteges.

Because of their extraordinary, laser-like intelligence, they may become overconfident and overestimate their abilities in other areas, and may fail spectacularly in these (and their ideas/programs may likewise fail spectacularly). They are unlikely to change their views in light of the demonstrable failure of their ideas and will construct powerful rationalizations about why their ideas fail. They are therefore, at base, irrational in their views, in spite of appearing to be rational and confident. They will deny, rationalize, and transfer the failure of their larger outlook rather than concede to reality. On a related note, there is no practical difference between someone who is irrationally tied to a position and someone who is ideologically wedded to it.

True Believers may be opaque or conspicuous players. They are purely conventional in outlook, although they are tactically innovative to the point of genius.

True Believers are the most dangerous people in government.

Addendum, July 2022: The Good People
One of life’s ironies is how people with extreme ideologies and outlooks can be nice people and how people with enlightened outlooks can be horrible human beings. Hitler loved dogs and his secretaries loved him; Churchill’s secretaries hated him (admittedly, Churchill had some ugly views on race and imperialism). William O. Douglas was a holy terror to his law clerks and yet enshrined the right of privacy.

The former category I refer to as The Good People, or those who may be charming, engaging, honorable, loving, polite, and warm as individuals, but who pursue the most illiberal or otherwise destructive of policies in their official capacities. They are often a subset of True Believers. At best, they may be genuinely good people tragically caught up in circumstances and enlisted in a bad cause. It is the sort of person Henry Adams was describing when he wrote, “It is always the good men who do the greatest harm in the world” (he was writing about Robert E. Lee). As Graham Greene writes in The Quiet American, “He comes blundering in, and people have to die for his mistakes… God save us from the innocent and good.” (Graham Greene, The Quiet American, New York: Bantam, 1957, quoted by Arthur Schlesinger in Robert Kennedy and His Times, Boston: Houghton Mifflin Company, 1978, 461).

Historical Examples
In order to flesh out these categories beyond mere criteria, consider the historical examples below.  This is nothing more than a shot-from-the-hip scattershot of opinion.

  • Theodore and Franklin Roosevelt were Maverick presidents who set the tone of the Establishment of their time.
  • George C. Marshall and Dwight D. Eisenhower were military Establishment Types. An imaginative combat commander like Matthew Ridgway was a Good Maverick subordinate to them.
  • Churchill had characteristics of a Rogue, Maverick, and a conservative imperial Establishment Type.
  • Dynamic combat officers like Curtis LeMay, Douglas MacArthur, and George Patton were extreme, frequently effective military Mavericks bordering on Rogues.  MacArthur was a cooperative Maverick Establishment Type during the rebuilding of Japan but became something like a partisan Rogue during the final phases of his command in Korea (he did return to the United States after being relieved, so he still acknowledged the civil authority above him).
  • J. Edgar Hoover was a pernicious Rogue who devised a departmental Establishment that exerted influence over the entire government.
  • Huey P. Long was a populist Rogue of a state government and within the Democratic Party.
  • Robert Moses was an Establishment Rogue of the New York Port Authority.
  • Joseph McCarthy was a cynical careerist-turned-Rogue.
  • Heinrich Himmler and Albert Speer are paragon examples of the careerist—the person who does whatever is necessary to advance himself/herself, regardless of the regime.
  • Lyndon Johnson seems to have elements of all of the above categories (except perhaps the Conventionalist).  He availed his administrations of Mavericks and Establishment Types and seemed to have both envied and despised the Eastern Establishment.
  • Richard Nixon was an odd combination of a highly individualized (almost outsider), hardball player with interesting contradictions. Like Johnson, he envied and despised the Establishment and eventually became an unhinged Rogue who, at the end of his administration, had sufficient control to resign.
  • Napoleon was a strange amalgam of an adventurer, idealist, and realist that gives him qualities of a Maverick, Rogue, and a creator of an Establishment (that collapsed with him).  One problem with a leader who rules by force of personality (other examples would be Cromwell and Castro) is that the system they put in place is difficult to sustain after them, thus creating problems of succession.
  • Adolf Hitler was the most pernicious of Rogues. He created and presided over a regime based on an extreme crackpot ideology, ethnic phobia, myths of racial warfare, and bad science. The Weimar Republic before him was a weak and ineffectual Establishment.
  • Fidel Castro was a popular rebel who became a Rogue under the guise of a utopian revolutionary.
  • Josef Stalin and Mao Zedong were pernicious utopian Rogues.
  • Howard Hughes was a Good Maverick business type who became increasingly psychotic.
  • Preston Tucker was a Good Maverick business type.
  • Jane Jacobs was a Good Maverick as independent intellectual.
  • George Washington was an aristocratic Establishment Type who devised the role of the president and demonstrated a Cincinnatus-like respect for the system by voluntarily relinquishing power at the end of two terms.  His key advisor, Alexander Hamilton, was the prototype American Maverick advisor.
  • Oliver Cromwell was a utopian Rogue and Charles II was a regal Establishment Type (here we can see outlook driving the respective roles). 
  • Otto von Bismarck was a conservative Maverick who created a domestic social welfare state and a military Establishment that only he could control. 
  • Helmuth von Moltke the Elder was a military Establishment Type who also devised revolutionary ideas within a strict organizational framework.
  • Elihu Root was a Hidden Hand/Opaque Player as well as an Establishment Type.

A Few Words about a Few Words (or: Get Your Neologisms off My Lawn!)

Michael F. Duggan

May Noam Chomsky forgive me for my snobbery.

I know that this stuff is all artificial, but to one degree or another, all wordsmiths are curmudgeons about usage.  I will leave it to others to say whether or not I qualify as a wordsmith, but I certainly have opinions on the use of words. There are people who can discourse at length about why the Webster’s International Dictionary 2nd ed. is superior to subsequent editions (it is), why the Elements of Style is “The Bible” (it is), or why they rely on The Associated Press Stylebook and Libel Manual.  More generally, everybody who writes or reads has favorite and least favorite words and preferred/least preferred usage.  Likewise, some of us have words and usages that are fine in some contexts but insufferable in others. 

There are pretentious neologisms, self-consciously trendy or generational hangnails of usage, unnecessarily technical social science and other academic jargon that has crept into the public discourse (and don’t get me started about hacks like Derrida and Heidegger), and the overuse and therefore the tweaking of existing words.  Below is a partial list of words and phrases that affect me like fingernails drawn down a dry chalkboard.  This posting is written in a tone of faux smugness/priggishness and is not intended to be mean, so don’t take it to heart if you have ever used or otherwise run afoul of any of the offending terms.  Below that is a slightly hysterical grouse I wrote a year or two ago about the recent appropriation of the word “hipster.”

Enjoy (if that’s the right word).

  • All you need to know about… Click bait for people who want to know the bullet points of conventional wisdom on a popular or topical issue.
  • Bad Ass. A term once reserved for outlaw bikers, rogue athletes, some convicts, gang members, other criminal and quasi-criminal types, as well as tough guy soldiers, sailors, and marines. Today it is a marginally hip compliment used to describe or encourage someone modestly able to assert himself/herself or whose delicate ego could use a boost. When used as an adjective, it is a more self-dramatizing, mildly profane version of “cool” (see below).
  • Begs the question. This is a term correctly used in logic and forensics to describe an argument that assumes the very conclusion it is trying to prove (i.e., circular reasoning).  Today you will likely hear it on the news meaning something like “frames,” “poses,” “suggests,” or “implies the question…” as in the statement: “The result of today’s election begs the question of whether the nation is suffering from mass psychosis or merely a bizarre cult-like phenomenon.”
  • Bucket List. A list of things to cross off in order to know that it is time to die.
  • Cool. First used around the time of WWI, this is a ubiquitous, burned-out synonym for “good” or “desirable” in a context of pop culture conformity. A common term of reverse snobbery indicating approval and therefore social acceptance among “cool” people (including, presumably, the speaker bequeathing approval/acceptance) that is mostly identical to the post-1990s use of the word “hip” (see rant below).  Like “hip,” it was once a rebellious alternative to more conventional terms of approval like “good.” Unless I am describing a day below 60 degrees, soup that has sat around too long, or a certain kind of modern jazz, I am attempting—mostly unsuccessfully—to wean myself off of this insipid, reflexive word. It is still preferable to (and more durable than) the comically dated groovy. There is another, related usage, characterizing a kind of effortless nonchalance and grace in a person, usually understood in terms of mass culture desirability and approval.
  • DMV. Madison Avenuesque abbreviation for the “District of Columbia, Maryland, Virginia” region. I associate it with the “Department of Motor Vehicles.” If I ever become hip (modern usage) enough to voluntarily use this term without derision, I hope to be struck by a big Damned Motor Vehicle immediately thereafter. Not actually used by people from the greater Washington, D.C. area.
  • Fetishize. Verb form of fetish: to make something the object of a fetish. An obsession. To abnormally or inappropriately ascribe more importance or interest to a thing than is necessary or deserved. Fetishize is commonly used by people who fetishize words like “fetishize.”
  • Great Recession. A lazy, pseudo-historical term used by pundits in the corporate media to characterize the depression that followed the collapse of 2008 and the economic conditions that persist today throughout much of the country. A recession is a cyclic downturn in the economy; a depression is an economic crisis caused by structural flaws in the economy. The present crisis will continue, even if some economic indicators improve, as long as the structural defects in the economy remain uncorrected. The underlying causes of the crisis of 2008 are still very much in place as of this writing (2019).
  • Hero. A good word, especially when used in discussions on Greek tragedy and in literary discussions generally (e.g. the Byronic Hero, the Hemingway Code Hero, etc.). Also a good word when used sparingly, quietly, and modestly and when it does not command or demand instant, uncritical adoration, conformity, or the surrendering of opinions not shared by the majority (e.g. the silencing of a person who speaks out against a harmful or ill-conceived policy because it might affront the honor and dignity of a person who acted with courage in furtherance of such policy). There is much that is heroic in the human heart and in noble, selfless—especially self-sacrificing—acts that flow from it. In recent years it has been overused in a way that is manipulative or distracting by the corporate media. In the First World War, this kind of usage was derided as “newspaper patriotism” by those who actually served. Literary critic and WWII Marine Corps combat pilot Sam Hynes refers to this kind of usage as a “windy word.” Ben Franklin writes about the dangers of “the hero” as a historical type. Others have written and spoken thoughtfully of the peril of nations in need of heroes and of the uncritical worship of heroes in a hard, ideological sense.
  • I am passionate about ____. An enthusiastic, youthful way of emphasizing that one cares about something with a depth of feeling beyond the ordinary. Often heard in job interviews to express breathless eagerness.
  • Icon/Iconic. Good words in traditional usages (e.g. medieval religious portraiture).  In the modern popular and media usage, the new meaning is something like: universally emblematic of itself; characterizing the empty husk of a thing or person once fresh, original, and important, now reduced to an instantly recognizable cliché or a symbol mostly drained of any content, substance, or meaning. An image from which all depth and nuance has been sucked out, leaving a reflexively recognizable reduction (e.g. Rodin’s The Thinker, da Vinci’s Mona Lisa, and Munch’s The Scream). A complex thing reduced to a symbol or to mass culture banality. Ostensibly a compliment, being called an “icon” is in essence the same as being called a lazy, two-dimensional cliché.
  • Incentivize. To give people an incentive to do something, I suppose.
  • Influencer. Presumably someone with disproportional influence relative to their insight, merit, wisdom, and taste, or lack thereof. Precise meaning not apparent.
  • Is that really a thing? A more diffuse way of saying “Really?” or “Is that something people actually do or believe?”
  • Juxtaposition.  Use sparingly.  Otherwise it suffers from some of the complaints against “paradigm” (see below).
  • Look. A word used by pundits on political talk shows before or at the beginning of a sentence for no apparent reason.
  • My Bad. An efficient, if ungrammatical, mea culpa for a minor infraction.
  • Miracle. A term of faith cynically and shamelessly appropriated by the media to describe an event (usually an accident or disaster) where survival or a happy outcome was dramatic, surprising, or unlikely, but well within the realm of possibility without divine intervention.
  • Narrative. A term borrowed from literary criticism and academic history departments meaning a particular ideological or personal explanation, interpretation, or version.  Often used to cast doubt on or call into question an interpretation by implying a self-serving or subjective account (or that there are no “objective” accounts).  Instead of “narrative,” I prefer “interpretation” as a less loaded alternative.  Explanations should be examined for their truth content and not dismissed solely because of a presumed perspective or the inferred state of mind of the narrator (an error of analysis known as psychologism).
  • No worries. This term obviously means “Don’t worry about it” or “No big deal/no problem.” It was appropriated from the Aussies around or just before the turn of the twenty-first century. Do not use unless you are Australian and only if followed by “mate.”
  • Paradigm/Paradigm Shift/Paradigmatic. A term that crept out of the philosophy of science of Thomas Kuhn.  A favorite term of hack academics and others trying to sound smart (see “juxtaposition”).  Outside specific academic usage, one should probably avoid this word altogether (and even when writing technically, “frame” or “framework” are less pretentious and distracting).  If someone puts a gun to your head and commands you to use the adjective form, try “paradigmic” (parr-uh-dym’-ik).  I don’t know whether or not it is a real word, but it still sounds better than “paradigmatic,” arguably the most offensive word in modern English (and your example might help start a trend for others under similar duress).
  • Reach[ing/ed] out to… Just call the guy; reaching out to him doesn’t make you a better person any more than “passing away” makes you any less dead than someone who has simply died.
  • So, … A horrible word when said slowly and pronounced “Sooo…” at the beginning of a spoken paragraph or conversation or when starting to answer a question.  An introductory pause word common among people born after 1965. It is a word that allows the user to sound both didactic and flaky at the same time. A person who uses “So…” this way throughout all but the shortest of conversations can make some listeners from previous generations want to throw a heavy object at the nearest wall.
  • Snarky. An old term that came back in the 1990s. Just a weaker and less efficient (two-syllable) way of saying “snide.”
  • Society. A decomposing whale carcass left by the tide at the mean water mark, thus denoting a certain time and place. Although silent, it is depicted either as malodorous or once-great. The mean of dominant mores and public opinion.
  • Spiritual/Spirituality. A word commonly (and confidently) thrown down as a solemn trump card in discussions on metaphysics but which means nothing more than a vaguer form of “religiosity” without a commitment to specific beliefs and obligations. It is a word that allows the speaker to elevate him/herself above the conformist throng of the more conventionally faithful and makes him/her seem deeper, more individualistic, and mysterious to the unwary.
    It is also an ill-defined projection of a speaker’s personality into the realm of metaphysics. It is the result of someone (often an adolescent) who wants to believe in something otherworldly when existing belief systems are found wanting, implausible, or unacceptable whole cloth. An imprecise word whose imprecision gives it a false authority or gravitas when any number of more precise words from philosophy, psychology, or theology would suffice (e.g. animism, cosmology, deism, epiphany, exaltation, inspiration, pantheism, neo-paganism, theism, transcendentalism, and New Agey cults and religions, etc.). Although the definition of words is seldom important in good faith critical discussions, one should always ask for a concise definition of spirituality whenever it comes up in conversation. Note: there may be a narrow context or range of usage where this word is appropriate, such as referring to a priest or minister as a spiritual advisor.
  • Please Talk About... A favorite, if inarticulate, invitation of radio and television interviewers with insufficient knowledge or information to ask actual questions of an expert guest, thus allowing interviewees to spin things in a way that is favorable to their perspective (e.g. “Your company is responsible for the recent catastrophic oil spill that has killed all of the marine life in the region. Talk about the safety precautions it has put in place since the spill.”).
  • Text. A noun meaning a written work or a portion of writing.  It is pretentious as hell and, I believe, an inaccurate word.  Human beings do not read text. We read language.
  • Thinking outside of the box. An inspirational “inside the box” cliché expressing a good idea. Not being bound by a limiting conventional framework (or, in the narrow and correct usage in science/philosophy of science, a paradigm). Science progresses by advancing to a point where it smashes the existing frame (e.g. Special and General Relativity superseding the Newtonian edifice in the early twentieth century). Ironically, this term is often used by conventionalist businessmen/women who somehow think of themselves as mavericks and innovators. A term favored by motivational speakers, leaders of focus groups, and other manic careerist types and their adjuncts.
  • To be sure. A common infraction even by important historians, social commentators, and novelists when conceding a point they consider to be unimportant to the validity of their overall argument (usually at the start of a paragraph).  No less a writer than Henry Miller has succumbed to “to be sure.” It was fine in Britain 100-150 years ago, but is hard to stomach today because of its confident overuse and how it strikes the ear as old fashioned. Consider instead: “Admittedly,” “Certainly,” “Of course,” “Albeit” (sparingly), and other shorter and less smug-sounding terms. It is still an acceptable mainstay of pirate talk, however, and, to be sure, one can easily imagine its use by Wallace Beery as Long John Silver in the 1934 movie version of Treasure Island. International Talk Like a Pirate Day is September 19.
  • Trope. An okay word that is overused to disparagingly characterize an overused story. Use it perhaps three times in your life.
  • You as well. A less efficient way of saying “You too.” A classic illustration of middle class “syllable multiplication” (see Paul Fussell’s Class). I think people use this to add variety to their usage rather than rely solely on the less satisfying “You too.” Unconsciously, people might think that a simple sentiment may be made somehow more interesting by expressing it with more words/syllables (e.g. using “indeed” rather than “yes” in simple agreement). In a similar sense, syllable multiplication gives the illusion of adding content. A similar phenomenon is the pronunciation of some multi-syllable words with emphasis on the last syllable, giving the impression of two words (e.g. “probably” spoken as “prob-ub-lee” with emphasis on the suffix).
  • You’re very welcome. An in-your-face, parrot or mirror-like reply to “Thank you very much.”* Common among people under 40, it may be used earnestly, reflexively, or to mock what the young perceive to be the pretentious hyperbole of older people who have the unmitigated gall to add the intensifier “very” when a simple “thank you,” “thanks,” or understated nod would suffice. Even in a time when “very” is very much overused, one should take any sincere variation of “Thank you” for how it was intended—as a gift of civility and etiquette freely offered—and a mocking or mildly snarky reply of “you’re very welcome” is at least as smug as this blog posting. *Note: The word very should never be used in writing as an intensifier (there are some acceptable usages such as “by its very nature”).
  • Weaponize. To give something an added function by making it into a weapon or something to be turned against another person (e.g. “She effectively weaponized the stapler by throwing it at him”).

Finally, there is a much-maligned word that I would like to resurrect or at least defend: Interesting. If used as a vague and non-committal non-description or non-answer, it should be avoided unless one is forced into using it (e.g. when one is compelled by circumstances to proffer an opinion or else be rude or lie outright; in this capacity, the guarded “interesting” never fools anybody and is usually interpreted as a transparent smokescreen for a negative opinion). However, for people who like ideas and appreciate the power and originality of important concepts, “interesting” can be used as an understated superlative—a quiet compliment, a note of approval or admiration that opens a door to further explanation and elaboration.

Essay: On the Hip and Hipsters

Present rant triggered by a routine stop at a coffee shop. 

I appreciate that language evolves, that the meanings of words change, emerge, evolve, disappear, diverge, procreate, amalgamate, reemerge, splinter-off, become obscure, and overshadow older meanings, especially in times of rapid change.  I am less sanguine about words that seem to be appropriated (and yes, I know that one cannot “steal” a word) from former meanings that still have more texture, resonance, authenticity, and historical context for me in their original usage.

For example, over the past decade, and probably going back to the 1990s, the word “hipster” has taken on a new, in some ways inverse, but not unrelated meaning to the original. My understanding of the original meaning of “hipster” was that of a late 1930s-1950s blue collar drifter, an attempted societal drop-out, a modernist descendant of the romantic hero, and a borderline antisocial type who shunned the “phoniness” of mainstream life and commercial mass culture and trends and listened to authentic (read: African-American) jazz—bop—(think of Dean Moriarty from On the Road).1 

He/she was “hip” (presumably an evolution of 1920s “hep”)—clued-in, disillusioned—to what was really going on in the world behind the facades and appearances. This meaning stands in contrast to today’s idea of “hip” as being in touch with current trends—an important distinction. The hipster presaged the beat of the later 1950s, who was more cerebral, contrived, literary, and urban. In the movies, the male of the hipster genus might have been played by John Garfield or Robert Mitchum. In real life, Jackson Pollock will suffice as a representative example. Hipsters were typically flawed individuals and were often irresponsible and failures as family people. But at least there was something authentic and substantial about them as an intellectual type.

By contrast, today’s “hipster” seems to be self-consciously affected right down to the point of his goateed chin: consciously urban (often living in gentrified neighborhoods), consciously fashionable and ahead of the pack, dismissive of non-hipsters (and quiet about his/her middle-to-upper-middle class upbringing in the ‘burbs and a childhood once centered around play dates), a conformist to generational chauvinism, clichés, and dictates.  Today’s hipster embodies the calculation and trendiness that the original hipsters specifically stood against (they were noticed, not self-promoted).  Admittedly, hip talk was adopted by the Beats and later cultural types, and elements of it became embedded in the mainstream and then fell out of favor. Today it seems affected and corny (as Hemingway observes, “…the most authentic hipster talk of today is the twenty-three skidoo of tomorrow…”).2

I realize that this might sound like a “kids these days” grouse or reduction—and I hope it is not; upon the backs of the rising generation ride the hopes for the future of the nation, our species, and the world. I have known many young people—interns and students—the great majority of whom are intelligent, serious, thoughtful, and oriented toward problem solving and social justice. There is also a strong current toward rejecting the trends of previous generations among them. The young people these days have every right to be mad at what previous generations have done to the economy and the environment and perhaps the hipsters among them will morph into something along the lines of their earlier namesake or something better.

If not, then it is likely that the word will continue to have a double meaning as the original becomes increasingly obscure or until another generation takes it up as its own.

  1. For the best analyses and commentary on the original meaning of “hip” and “hipster,” see Norman Mailer’s “The White Negro,” “Reflections on the Hip,” “Hipster and Beatnik,” and “The Hip and the Square” in Advertisements for Myself.
  2. See “The Art of the Short Story,” in The Short Stories of Ernest Hemingway, Hemingway Library Edition, 2.