A Few Words on a Few Words (or “Hey You Kids: Get Your Neologisms off My Lawn!”)

Michael F. Duggan

At one level or another, every wordsmith is a curmudgeon about usage.  I will leave it to others to determine whether or not I qualify as a wordsmith, but it is certainly not beyond me to be a curmudgeon on some topics. There are people who can discourse at length about why Webster's International Dictionary, 2nd ed., is superior to previous and subsequent editions, or why The Elements of Style is "The Bible."  More generally, everybody who writes or reads has favorite and least favorite words and preferred/least preferred usages.  Similarly, some of us have words and usages that are fine in some contexts but insufferable in others.  

There are pretentious neologisms, self-consciously trendy or generational hangnails, unnecessarily technical social science or other academic jargon that has crept into the public sphere (don't get me started about Derrida and Heidegger), and the overuse and therefore the tweaking of existing words.  Below is a partial list of words and phrases that strike me much like fingernails on a chalkboard.  This posting is written in a tone of faux smugness/priggishness and is not intended to be mean, so please do not take it to heart if you have ever run afoul of any of the offending terms. Below that is a slightly hysterical rant/grouse/essay I wrote a year or two ago about the recent appropriation of the word "hipster." 

Enjoy (if that’s the right word).

  • All you need to know about… Click bait for people who want to know the bullet points on a popular or topical issue.
  • Begs the question. This is a term correctly used in logic and forensics to describe an argument or reply that avoids addressing or answering the issue at hand.  Today you will likely hear it on the news meaning something like “suggests,” “poses,” or “implies the question…” as in the statement: “The result of today’s election begs the question of whether the nation is suffering from mass psychosis.”
  • Cool. A ubiquitous, burned-out synonym for "good" or "desirable" in a context of modern pop culture conformity.  A popular term of reverse snobbery indicating approval and therefore social acceptance among "cool" people (including the speaker) that is mostly identical to the post-1990s use of the word "hip" (see rant below).  Like "hip," it was once a rebellious alternative to older terms of approval. Unless I am referring to a day below 60 degrees, soup that has sat around too long, or a certain kind of modern jazz, I am attempting—mostly unsuccessfully—to wean myself off of this insipid, reflexive word. It is still preferable to (and more durable than) the more dated "groovy."
  • DMV. Local Madison Avenue-esque abbreviation for the “District of Columbia, Maryland, Virginia” region. I think of it as representing the “Department of Motor Vehicles.” If I ever become hip (modern usage) enough to voluntarily use this term, I hope that I will be struck by a large Motor Vehicle immediately thereafter.
  • Fetishize. Verb form of fetish—to make something the object of a fetish. To abnormally or inappropriately ascribe more importance or interest to a thing than is necessary or deserved. Fetishize is commonly used by people who fetishize words like “fetishize.”
  • Icon/Iconic. Perfectly good words in traditional usages (e.g. medieval religious portraiture).  In the modern popular and corporate media, the new meaning is something like “A thing or person once fresh, original, and important, now reduced to an instantly recognizable cliché or a symbol mostly drained of any content, substance, or meaning.”
  • I’m a survivor. A perfectly good term, but only if volunteered modestly (i.e. not as a boast) and if the user has survived a cataclysmic event.
  • Juxtaposition.  Use sparingly.  Otherwise it suffers from some of the complaints against “paradigm.”
  • Narrative. A term borrowed from literary criticism and academic history meaning a particular ideological or personal explanation or interpretation.  Often used to disparage an interpretation by implying a self-serving, or subjective account (or that there are no “objective” accounts).  Instead of “narrative,” I prefer “interpretation” as a more neutral alternative.  Explanations should be examined for their truth content and not dismissed solely because of an implied perspective or the implicit state of mind of the narrator (an error of analysis known as psychologism).
  • No worries. This term obviously means “Don’t worry about it” or “No big deal/problem.” Appropriated from the Aussies around or just before the turn of the twenty-first century. Do not use unless you are Australian and only if followed by “mate.”
  • Paradigm/Paradigm Shift/Paradigmatic. A term that crept out of the philosophy of science of Thomas Kuhn (and a variation on ideas of Karl Popper and others).  A favorite word of hack academics and others trying to sound smart (see “juxtaposition”).  Outside of very specific academic usage, one should probably avoid this word altogether (and even when writing technically, “frame” or “framework” are less pretentious and distracting).  If a person puts a gun to your head and commands you to use the adjective form, try “paradigmic” instead.  I don’t know whether or not it is a real word, but it is still better than “paradigmatic,” arguably the most offensive word in modern English.  And you might help start a trend for others under similar duress.
  • Privileged/Privilege. A term (and its variations) useful for instilling guilt-on-demand in rich liberals, for provoking an embarrassing, ham-handed defensive response from "the haves" in general, or simply as a mild veiled threat of the possibility of all-out class warfare (thank goodness for the "privilege" of the Roosevelts, and John and Robert Kennedy). It is not clear how or to what degree this term applies to unemployed white autoworkers, the dozens of daily opioid overdoses in places like Huntington, WV or Pottsville, PA, recent college grads with few employment prospects above the introductory level of the fast food service industry, or the downwardly-mobile former working middle class in general. It is wise to tread lightly around this divisive term in times when national unity is scarce.  The party that used identity as a basis for strategy did not fare well in 2016.  Of course, it is always best not to reduce people to generalized categories of race and sex. Most Americans are "privileged" by world standards, so this term can easily be turned against almost anyone who uses it in this country.
  • Reach[ing/ed] out to… Just call the guy; reaching out to him doesn’t make you a better person any more than a person who has “passed away” is any less dead than someone who has simply died.  
  • So… A horrible word when said slowly and pronounced “Sooo…” at the beginning of a spoken paragraph or conversation.  An introductory pause word common among people born after 1965. A person who uses “So…” this way throughout all but the shortest of conversations can make some listeners from previous generations want to throw a heavy object at the nearest wall.
  • Spiritual/Spirituality. A word commonly (and confidently) thrown down as a solemn trump card in discussions on metaphysics but which means nothing more than a vaguer form of "religiosity" without a commitment to specific beliefs. An ill-defined projection of a speaker's personality into the realm of metaphysics. The resort of one who wants to believe in something otherworldly when existing belief systems are found wanting or are unacceptable whole cloth. An imprecise word whose imprecision gives it a false authority or gravitas when any number of more precise words from philosophy, psychology, or theology would suffice (e.g. animism, cosmology, deism, epiphany, exaltation, inspiration, pantheism, paganism, theism, transcendentalism, and the names of specific religions, etc.). Although the definition of words is seldom important in good faith critical discussion, one should always ask for a concise definition of spirituality whenever it comes up in conversation. Note: there may be a narrow context or range of usage where this word is appropriate, such as referring to a priest or minister as a spiritual advisor.
  • Talk About. A favorite, if inarticulate, invitation of radio and television interviewers with insufficient knowledge or information to ask actual questions, thus allowing interviewees to spin things in a way that is favorable to them.
  • Technocrat. The problem with this term is that like "hipster" (again, see below), it has two related but substantially different meanings. To those under 40, it typically refers to a person belonging to a technical or technological elite that is blind to all but technological solutions to all of the nation's and the world's problems. As such it is a perfectly good–if overused–term of derision against an arrogant class. The issue I have is that there is an older definition meaning simply a specialized public servant. If Benjamin Cohen, Thomas Corcoran, Harry Hopkins, Harold Ickes, George F. Kennan, John McCloy, George C. Marshall, and Frances Perkins are "technocrats," then I have nothing but admiration for many people covered by this older usage.
  • Text. A noun meaning a work or a portion of writing by a given author.  It is pretentious as hell, and I believe an inaccurate word.  Human beings do not read text; we read language.
  • Thinking outside of the box. An inspirational "inside the box" cliché expressing a good idea: not being bound by a limiting conventionalist framework (or, in the narrow and correct usage in science/philosophy of science, a paradigm). Science progresses by advancing to a point where it smashes the existing frame (e.g. Relativity superseding the Newtonian edifice in the early twentieth century). Ironically, this term is often used by conventionalist businessmen/women who somehow think of themselves as mavericks and innovators.
  • To be sure. A common infraction among even important historians and social commentators when conceding a point they consider to be unimportant to their overall argument (usually at the start of a paragraph).  It was fine in Britain 100-150 years ago, but is hard to stomach today because of severe overuse.  Consider instead: "Admittedly," "Certainly," "Of course," "Albeit" (sparingly), and other shorter and less pretentious terms.
  • Trope. An okay word that is overused.
  • You’re very welcome. A mirror reply to “Thank you very much.” Common among people under 40, it may be used earnestly, reflexively, or to mock what the young perceive to be the pretentious hyperbole of older people who have the unmitigated gall to add the intensifier “very” when a simple “thank you” or even “thanks” would suffice. Even in a time when “very” is very much overused, one should take any sincere variation of “thank you” for how it was intended—as a gift of civility and etiquette freely offered—and a mocking or mildly sarcastic reply of “you’re very welcome” is at least as smug as this blog posting.

Finally, there is a much-maligned word that I would like to resurrect or at least defend: Interesting. If used as a vague and non-committal non-description, it should be avoided unless one is forced into using it (e.g. when one is compelled by circumstances to proffer an opinion when one does not like something; in this capacity, the use of this word never fools anybody). However, for people who like ideas and appreciate the power and originality of important concepts, “interesting” can be used as an understated superlative—a quiet compliment that opens a door to further explanation and elaboration.

Essay: On the Hip and Hipsters

Present rant triggered by a routine stop at a coffee shop. 

I appreciate that language evolves, that the meanings of words change, emerge, disappear, diverge, procreate, amalgamate, splinter-off, become obscure, and overshadow older meanings, especially in times of rapid change.  I am less sanguine about words that seem to be appropriated (and yes, I know that one cannot “steal” a word) from former meanings that still have more texture, resonance, authenticity, and historical context for me.

For example, over the past decade (1990s?) the word “hipster” has taken on a new—in some ways inverse—but not unrelated meaning to the original. The original meaning (to my knowledge) of “hipster” was a late 1930s-1950s blue collar drifter, an attempted societal drop-out, a modernist cousin of the romantic hero, and a borderline antisocial type, who shunned the “phoniness” of mainstream life and commercial mass culture and trends and listened to authentic (read: African-American) jazz—bebop—(think of Dean Moriarty from On the Road). 

He/she was “hip” (presumably an offshoot of 1920s “hep”)—clued-in, disillusioned—to what was really going on in the world behind the facades and appearances (and not today’s idea of “hip” as being in touch with current trends—an important distinction). The hipster presaged the beat of the later 1950s, who was more cerebral, contrived, literary, and urban. In the movies, the male of the hipster genus might have been played by John Garfield or Robert Mitchum. In real life, Jackson Pollock will suffice as a representative example. Hipsters were typically flawed individuals and were often irresponsible and failures as family people. But at least there was something authentic about them.

By contrast, today’s “hipster” seems to be self-consciously affected right down to the point of his goateed chin: consciously urban (often living in newly gentrified neighborhoods), consciously fashionable and ahead of the pack, dismissive of non-hipsters (and quiet about his/her middle-to-upper-middle-class upbringing in the ‘burbs and a childhood once centered around play dates), a conformist to his generation’s dictates.  Today’s hipster embodies the calculation and trendiness that the original hipsters stood against (they were noticed, not self-promoted). 

I realize that this might sound like a “kids these days” grouse or reduction—and I hope it is not; upon the backs of the rising generation ride the hopes for the future of the nation, species, and the world. I have known many young people–interns and students–the great majority of whom are intelligent, serious, thoughtful, and oriented toward problem solving and social justice. There seems to be a strong current toward rejecting the trends of previous generations among them. The young people these days have every right to be mad at what previous generations have done to the economy and the environment and perhaps the hipsters among them will morph into something along the lines of their earlier namesake or something considerably better.

If not, then it is likely that the word will continue to have a double meaning as the original becomes increasingly obscure or until another generation takes it up as its own.

The Wisdom and Sanity of Andrew Bacevich

Book Review

By Michael F. Duggan

Andrew J. Bacevich, Twilight of the American Century, University of Notre Dame Press, 2018.

What do you call a rational man in an increasingly irrational time?  An anomaly?  An anachronism?  A voice in the wilderness?  A faint glimmer of hope? 

For those of us who devour each new article or book by the prolific Andrew J. Bacevich, his latest book Twilight of the American Century—a collection of his post-9/11 articles and essays (2001-2017)—is not only a welcome addition to the oeuvre but something of an event.  In these abnormal times, Bacevich, a former army colonel who describes himself as a traditional conservative, is nothing short of a bomb-thrower against the Washington Consensus.  Likewise the ominous title of the present collection does not look out of place among the apocalyptic titles of a New Left history professor (Alfred W. McCoy/In the Shadows of the American Century), an apostate New York Times journalist flirting with bottom-up Marxism (Chris Hedges/America: The Farewell Tour), and an economics professor from Brandeis (Robert Kuttner/Can Democracy Survive Global Capitalism?). 

The new book was worth the wait.    

A collection by an author with broad, deep, and nuanced historical understanding, Twilight of the American Century lends powerful insight over a wide territory of issues, events, and personalities.  The brevity of these topical pieces makes it possible to pick up the book at any point or to jump ahead to areas of special interest to the reader.  Bacevich, a generalist with depth and a distinctive voice, offers what is without a doubt the freshest and most sensible take on foreign policy and military affairs today.

In terms of outlook, Professor Bacevich harkens back to a time when “conservatism” meant Burkean gradualism—a cautious and moderate outlook advocating terraced progress over the jolts and whipsaw of radical change and destabilizing shifts in policy.  This perspective is based on a realistic understanding of human nature, that people are flawed and that traditions, the law, strong government, and the balancing of power are necessary to accommodate—to contain and balance—the impulses of a largely irrational animal and what Peter Viereck called its “Satanic ego.”  

As regards policy, traditional (read “true”) conservatism is fairly non-ideological.  It holds that rapid fundamental change results in instability and eventually violence.  Those who have studied utopian projects or events like the Terror of the French Revolution, the Russian Revolution, or the Cultural Revolution realize that this perspective might be on to something.  Traditional conservatives like Viereck believe that a nation should keep those policies that work while progressing gradually in areas in need of reform.  They also embrace progressive initiatives when they appear to be working or when a more conservative approach is insufficient (Viereck supported the New Deal).  The question is whether or not gradualistic change is even possible in a time of great division in popular politics and lockstep conformity and conventionalism among the members of the Washington elite. 

From his shorter works as well as books like The Limits of Power, Washington Rules, and America’s War for the Greater Middle East (to name a few) one gets two opposite impressions about Bacevich and his perspective.  The first is that he never abandoned conservatism; it abandoned him and became something very different—a bellicose radicalism of the right—that is odious to true conservatives.  The second is more personal: that, like a hero from the Greek tragic tradition, he realized in midlife that what he believed to be true was wrong.  At the beginning of his brutally honest and introspective introduction to the present book, he writes:

“Everyone makes mistakes.  Among mine was choosing at age seventeen to attend the United States Military Academy, an ill-advised decision made with little appreciation for any longer-term implications that might ensue.  My excuse?  I was young and foolish.”

The implication of such a stark admission is that when one errs so profoundly, so early in life, it puts everything that follows on a mistaken trajectory.  While this seems to be tragic in the classical sense (and is certainly “tragic” in more common usage as a synonym for catastrophic), it also appears to be what has made Bacevich the powerful critic he has become: to the wise, truth comes out of the realization of error.  His previous “erroneous” life gives him a template of uncritical assumptions against which to judge the insights hard-bought through experience and independent learning after he arrived at his epiphany, his moment of peripeteia.  The “mistake” (more like an object lesson of harsh self-criticism) and his realization of it with clarity of vision and disillusioned historical understanding made him the superb critic he has become (and to be frank, his career as an army combat officer gives him a certain “street cred” that cannot be easily dismissed and which he could not have earned elsewhere).  It seems unlikely that Bacevich would have happened on his current perspective as just another academic.  

One can only speculate about whether or not he makes the truth of his early “error” out to be more tragic than it really is.  A more charitable reading is that this admission casts him as the hero in a Popperian success story of one who has taken the correct lessons from his experience.  One can hardly imagine a more fruitful intellectual rising from a midlife crisis.  It is also difficult to imagine how he would have arrived at his depth as a mature commentator via a more traditional academic route.  But I draw close to psychologizing my subject.

In order to be a commentator of the first rank, a writer must know human nature—its attributes as a paragon among animals, its foolishness, its willfulness, its murderous irrationality—and must have judgment and a sense of circumspection that comes from historical understanding.  You must know when to criticize and when to forgive, lest you become mean.  Twain was a great commentator because he forgives foibles while telling the truth.  Mencken is sometimes mean because he does not always distinguish between forgivable failing or weakness and genuine fault, and exempts himself from his spot-on criticism of others. 

An emeritus professor at Boston University, Bacevich knows history as well as any contemporary public intellectual and much better than most.  His historical understanding far exceeds that of the neocon/lib critics and policymakers of the Washington foreign policy Blob.  He carries off his criticism so effectively, not by a lightness of touch, but by frank honesty.  It is apparent from the first line of the book that he holds himself to the same standards and one senses that he is his own toughest critic—his introduction is self-critical to the point of open confession.  Bacevich is tough, but he is one of those rare people who is able to keep himself unblinkingly honest by not exempting himself from the world’s imperfections. 

He dominates polemics then, not by raising his voice, but by reason and clear vision, sequences of surprising observations and interpretations that expose historical mythologies, false narratives, and mistaken perceptions, with an articulate and nuanced, if at times dour, voice.  Frank to the point of bluntness, he calls things by their proper name and has what Hemingway called “the most essential gift for a good writer… a built-in, shockproof, bullshit detector,” the importance of which goes double if the writer is a historian.  In less salty language, and in a time when so many commentators tend to defend questionable positions, Bacevich’s articles are a tonic because he simply tells the truth. 

In his review of Frank Costigliola’s The Kennan Diaries, he seems to flirt with meanness and overkill, but perhaps I am being oversensitive.  Like many geniuses—assuming that he is one—Kennan was a neurotic and eccentric, and it is all too easy to enumerate his many obvious quirks (if we judge great artists, thinkers, and leaders by their foibles and failures, one can only wonder how Mozart, Beethoven, Byron, van Gogh, Churchill, Fitzgerald, and Hemingway would fare; even the Bard would not escape whipping if we judge him by Henry VIII).  As a shameless Kennan partisan who tends to rationalize his personal flaws, perhaps I am just reacting as one whose ox is being gored.  I am not saying that Bacevich gets any of the facts wrong, only that the interpretation lacks charity.  

This outlining of Kennan’s shortcomings also struck me as ironic and perhaps counterproductive in that Bacevich is arguably the closest living analog or successor to Mr. X. as a commentator on policy, both in terms of a realistic outlook and in the role of historian as a Cassandra who is likely to be right and unlikely to be heeded by the powers that be.  Both fill the role(s) of the conservative as moderate, liberal-minded realist, historian as tough critic, and critic as honest broker in times desperately in need of correction.  As regards temperament, there are notable differences between the two: Bacevich strikes one as a stoical Augustinian Catholic where Kennan, at least in his diaries, comes across as a Presbyterian kvetch and perhaps a clinical depressive.  Like Kennan too, Bacevich is right about many—perhaps most—things, but not about everything; perfection is too much to ask of any commentator and we should never seek out spotless heroes.  The grounded historical realism and clear-sighted adumbration of both men is immune to the seduction of bubbles a la mode, the conventionalist clichés of neoliberalism and neoconservatism.

The book is structured into four parts: Part 1. Poseurs and Prophets, Part 2. History and Myth, Part 3. War and Empire, and Part 4. Politics and Culture.  The first part is made up of book reviews and thumbnail character studies.  If you have any sacred cows among the chapter titles or in the index, you may find your loyalty strongly tested and if you have anything like an open mind, there is a reasonable chance that your faith will be destroyed.  Charlatans and bona fide villains as well as mere scoundrels and cranks including the likes of David Brooks, Tom Clancy, Tommy Franks, Robert Kagan, Donald Rumsfeld, Arthur Schlesinger, Paul Wolfowitz, Albert and Roberta Wohlstetter, and, yes, George Kennan, all take their lumps and are stripped of their new clothes for all to see.  Throughout the rest of the book there is a broad cast of characters that receive a similar treatment.  

This is not to say that Bacevich does not sing the praises of his own chosen few, including Randolph Bourne, Charles and Mary Beard, Christopher Lasch, C. Wright Mills, Reinhold Niebuhr, and William Appleman Williams, but here too he is completely frank and provides a full list of favorites up front in his introduction (his inclusion of the humorless misanthrope Henry Adams—another Kennan-like prophet, historian, and WASPy whiner—is a little perplexing).   

Where to begin?  Bacevich’s essays are widely ranging and yet embody a consistent outlook.  Certain themes overlap or repeat themselves in other guises.  He has a Twain-like antipathy for frauds, fakes, and charlatans and is adept at laying bare their folly (minus Twain’s punchlines and folksy persona).  The problem with our time is that these people have dominated and their outlooks have become an unquestioned orthodoxy among their followers and in policy circles in spite of a record of catastrophe that promises more of the same.  To read Bacevich’s criticism is to realize that things have gone beyond an establishment wedded to an ideology of mistaken beliefs and into the realm of group psychosis.  One comes away with the feeling that the establishment of our time has become a delusional cult beyond the reaches of reason and perhaps sanity.  Hume reminds us that “reason is the slave of the passions,” and it is striking to read powerful arguments that are unlikely to change anything.  If anything, Bacevich’s circumspection, clarity of vision, common sense, and impressive historical fluency seem to disprove the observation attributed to Desiderius Erasmus that “in the land of the blind, the one-eyed man is king.”  More likely, in a kingdom of the blind, a clear-sighted person will be ignored or burned as a heretic if caught.

Are there any criticisms of Bacevich himself?  Sure.  For instance, one wonders if, like a gifted prosecutor, at times he makes the truth out to be clearer than it may really be.  In this sense his brilliant Washington Rules is a powerful historical polemic rather than a purely interpretive survey (like Robert Dallek’s The Lost Peace, which covers much of the same period).  Thus it is fair to regard him as a polemicist as well as an interpretive historian (again, this is not to suggest that he is wrong).  Also, given the imminent threat posed by the unfolding environmental crises, I found myself hoping that he would wade further into topics related to climate change—the emerging Anthropocene (i.e. issues of population, human-generated carbon dioxide, loss of habitat/biodiversity, soil depletion, the plastics crisis, etc.)—and wondering how he might respond to commentators like John Gray, Elizabeth Kolbert, Jed Purdy, Roy Scranton, Edward O. Wilson, and Malthus himself. 

The only other criticism is that Bacevich is so prolific that one laments not finding his most recent articles among the pages of the present collection.  This is what is known as a First World complaint.

Unlike a singular monograph, there is no one moral to this collection but a legion of lessons: that events do not occur in a vacuum—that events like Pearl Harbor, the Cuban Missile Crisis, and 9/11, and the numerous U.S. wars in the Near East all had notable pedigrees of error—and that bad policy in the present will continue to send ripples far into the future; that the stated reasons for policy are never the only ones and often not the real ones; that some of the smartest people believe the dumbest things and that just because you are smart doesn’t necessarily mean that you are sensible or even sane; that the majority opinion of experts is often wrong; that bad arguments sometimes resonate broadly and masquerade as good ones and that without a nuanced understanding of history it is impossible to distinguish between them.  If there is a single lesson from this book it is that the United States has made a number of wrong turns over the past decades that have put it on a perilous course on which it continues today with even greater speed.  Thus the title. 

In short, Bacevich, along with Barlett and Steele, and a number of other commentators on foreign policy, economics, and the environment, is one of the contemporary critics whose honesty and rigor can be trusted.  As a matter of principle, we should always read critically and with an open mind, but in my experience, here is an author whose analysis can be taken as earnest, sensible, and insightful.  He is also a writer of the first order.

My recommendation is that if you have even the slightest feeling that things are amiss in American foreign affairs, or if you are simply earnest about testing the validity of your own beliefs, whatever they are, you should read this book.  If you think that everything is fine with the nation and its policy course, then you should buy it today and read it cover to cover.  After all, there is nothing more dangerous than an uncritical true believer and we arrive at wisdom by correcting our mistaken beliefs in light of more powerful arguments to the contrary.  

A Wonderful Life?

By Michael F. Duggan

 For the past few years, I have posted a version of this essay around this time of year.  Having just watched the movie last night, here it is again.  

I have always loved the 1946 Frank Capra seasonal classic It’s a Wonderful Life, but have long suspected that it is a sadder story than most people realize (in a similar but more profound sense as Goodbye, Mr. Chips).  One gets the impression from the early part of the movie that George Bailey could have done anything, but was held back at every opportunity.  Last year, after watching it, I tried to get my ideas about the film organized and wrote the following essay.

In spite of its heart-warming ending, the 1946 Christmas mainstay by Frank Capra, It’s a Wonderful Life, is in some ways a highly ambiguous film and likely a sad story. George Bailey, the film’s protagonist played by Jimmy Stewart (in spite of his real-life Republican leanings), is the kind of person who gave the United States its most imaginative set of political programs from 1933 to 1945, which shepherded the country through the Depression, won WWII, and consequently produced its greatest period of prosperity from 1945 until the early 1970s (for a real-life sample of this kind of person, see The Making of the New Deal: The Insiders Speak). Bailey wants to do “something big and something important”—to “build things,” to “plan modern cities, build skyscrapers 100 stories high… bridges a mile long… airfields…” George Bailey is the big thinker—a “big picture guy”—and his father, Peter Bailey, the staunch, sensible, and fundamentally decent localist hero. Both are the kind of people we need now.

In a moment of frank honesty bordering on insensitivity, George tells his father that he does not want to work in the Building and Loan, that he “couldn’t face being cooped up in a shabby little office… counting nickels and dimes.”  His father recognizes the restlessness, the boundless talent and quality, the bridled energy, big thinking, and high-minded ambition of his son.  Although wounded, the senior Mr. Bailey agrees with George, saying “You get yourself an education and get out of here,” and dies of a stroke that same night—his strategically placed photo remains a moral omnipresence for the rest of the movie (along with presidential photos that link events to specific years).

One local crisis or turn of events after another stymies all of George’s plans to go abroad and change the world, just as they seem to be on the cusp of fruition. Rather than a world-changer, he ends up as a local fixer for the good—a better and more energetic version of a local hero, a status that confirms his “wonderful life” at the film’s exuberantly sentimental ending, where a 1945 yuletide flash mob descends on the Bailey house and saves the situation by returning decades’ worth of good faith, deeds, and subsequent material wealth and prosperity.  But what is it that sets George apart from the rest of the town that comes to depend upon him over the years?

At the age of 12 he saves his brother Harry from drowning (and by historical extension, a U.S. troopship a quarter of a century later), leaving George deaf in one ear.  Shortly thereafter, his keen perception prevents Mr. Gower, the pharmacist (distracted by the news of the death of his college student son during the Spanish Flu pandemic of 1918-1919), from accidentally poisoning another customer.  As an adult, George’s theorizing about making plastics from soybeans in a converted defunct local factory adds to the town’s prosperity and makes a less visionary friend (Sam “hee-haw” Wainwright) a fortune, but not one for himself.

Other than saving the Building and Loan from liquidation, George’s primary victory is marrying his beautiful and wholesome sweetheart—”Marty’s kid sister”—Mary (Donna Reed) and raising a family.  With a cool head, insight, and the help of his wife—and their own readily-available honeymoon funds—he stops a run on the Building and Loan in its tracks.  The goodwill is reciprocated by most of the Building and Loan’s depositors (one notably played by Ellen “Can I have $17.50” Corby, later Grandma Walton).

From there George goes on to help an immigrant family buy their own house and in fact builds an entire subdivision for the town’s earnest and respectable working class, all the while standing up to the local bully: the cartoonishly sinister plutocratic omnipresence and Manichaean counterweight to everything good and decent in town, Mr. Potter (Lionel Barrymore).  Potter is the lingering, unregulated nineteenth-century predatory plutocracy that, in modified form, cooked the economy during the 1920s, resulting in the Great Depression.  Even Potter comes to recognize George’s quality and unsuccessfully attempts to buy him off.

During the war, George’s bad ear keeps him out of the fighting (unlike the real Jimmy Stewart, who flew numerous combat missions in a B-24), so he makes himself useful with such patriotic extracurriculars as serving as an air raid warden and organizing paper, rubber, and scrap drives.  And yet he seems to have adapted to his fate of being involuntarily tethered to the small financial institution he inherited from his father, and therefore to the role of the town’s protector. He seems more or less happily resigned to his fate as a thoroughbred pulling a milk wagon.

Were George Bailey just another guy in Bedford Falls or most towns in the United States, this would indeed be a wonderful life—and for most of us it would be.  Even with all of his disappointments, his life is a satisfactory reply to the unanswerable Buddhist question, “how good would you have it?”  On the face of events, George seems to be a great success at the end of the movie—in case this is not abundantly apparent from the boisterous but benevolent 1940s Christmastime riot of unabashed exuberance (a reverse bank run, a bottom-up version of the New Deal, a spontaneous neighborhood Marshall Plan) that closes the film. His brother—now a Medal of Honor recipient—proudly proclaims “To George Bailey, the richest man in town.” This is confirmed in the homey wisdom inscribed in a copy of Tom Sawyer by George’s guardian angel, Clarence (a silly fictional device and concession to comic relief in a story about attempted suicide), that “no man is a failure who has friends.”

Of course Clarence is introduced into an already minimally realistic story to provide George with the exquisite but equally silly luxury—“a great gift”—of seeing what would have become of the town and its people without him (although to a lover of hot jazz, the business district of Pottersville—an alternate reality to the occasionally overly precious, Norman Rockwell-esque Bedford Falls—looks fairly attractive, with its jazz lounges, jitterbugging swing clubs, a billiards parlor, a (God forbid) burlesque hall, and what seems to be an unkind shot at Fats Waller).

In this Hugh Everett-like alternate narrative device and dark parallel universe, he sees that his wife Mary is an unhappy, mouse-like spinster working in a (God forbid) library, and that Harry drowned as a child and thus was not alive in 1944 to save a fully loaded troop transport.  Likewise, everybody else in the town is an embittered, anti-social, outright bad, or tragic version of themselves relative to the personally frustrating yet generally wonderful Rated-G version of George’s wonderful life.

The problem is that George is not ordinary; he is no mere careerist, conventionalist, or money-chasing credentialist—he is a quick-thinking, maverick problem-solver with a heart of gold. He is exactly the kind of person we need now, but whom the establishment of our own time despises.  Although harder to identify on sight, in our own time the charming and attractive Mr. Potters of the world have won.

In literary terms, George is not a typical beaten-down loser-protagonist of the modernist canon; he is not a Bartleby the Scrivener, a J. Alfred Prufrock, a Leopold Bloom, or a Willy Loman, but then neither is his stolid father (George is perhaps more akin to Thomas Hardy’s talented but frustrated Jude Fawley or a better version of James Hilton’s Mr. Chips—characters who might have amounted to more had they not been limited or constrained by external circumstances).

Rather, George is more in keeping with the great tragic-heroic protagonists of the Greeks and Shakespeare (i.e., a person who could have pushed the limits of the humanly possible), if only he could have gotten up to bat.  He might have done genuinely great things had his plans gotten off the ground, had the unforeseen chaos of life and social circumstances not intervened.  Just after breaking his father’s heart by revealing his ambitions, George correctly assesses and confides that the old man is a “great guy.”  True enough.  But the conspicuous fact is that the older Bailey is much more on the scale of a local hero, a “pillar of the community”—a necessary type for any town, someone to extinguish the day-to-day brush fires—and is therefore perhaps more fully actualized and resigned to his role (even though it kills him mere hours later—or was it George’s announcement?).  But George has bigger ambitions and presumably abilities to match.

In a perfect world, someone like Mr. Bailey, Sr. would be better (and in fact is) cast in the role to which his son is relegated, even though his ongoing David versus Goliath battles with Potter likely contributed to his early death.  George might have found an even more wonderful life if he had gone to college and law school and then gone to Washington to work for Tommy Corcoran and Ben Cohen, or as a project manager of a large New Deal program, or managing war production against the Nazis and Imperial Japanese.  Instead he organizes scrap and rubber drives and admonishes people to turn off their lights during air raid drills.  In a better world, a lesser man could have handled all the relative evils of Bedford Falls.

Of course the alternative is that George is delusional throughout the film, that he is not as great as we are led to believe, that—like most of us—he is not as good as his biggest dreams. But there is nothing in the film to suggest that this is the case.

The moral for our own time is that we need both kinds of Mr. Baileys—the father and the son—and it is clear that in spite of numerous local victories, George could have done far more in the broader world (his less interesting younger brother, Harry, seems to have unintentionally hijacked George’s plans and makes a good go of them: he goes off to college, lands a plum research position in Buffalo as part-and-parcel of marrying a rich and beautiful wife, then disproportionately helps win a world war, and returns, amazingly, as the same happy-go-lucky person, complete with our nation’s highest military honor, after lunching with Harry and Bess at the Executive Mansion). George is the Rooseveltian top-down planner and social democrat, while Mr. Bailey, Sr., is the organic, Jane Jacobs localist.

Even if we accept Capra’s questionable premise that George’s life is the most wonderful of possible alternatives (or at least pretty darned good), the ending is not entirely satisfactory for people used to Hollywood Endings: George’s likable but absent-minded Uncle Billy inadvertently misplaces $8,000 (perhaps ten or twenty-fold that amount in 2018 dollars) into Mr. Potter’s hands (a crime witnessed and abetted by Mr. Potter’s silent, wheelchair-pushing flunky, who, even without uttering a single line in the entire movie, is arguably the most despicable person in it—an equally silent counterpart to the photograph of the late Mr. Bailey, Sr.), and his honest mistake is never revealed, nor presumably is the money ever recovered.

Mr. Potter’s crime does not come to light, and George is very nearly framed by the incident and driven to despair. Instead of a watery self-inflicted death in the Bedford River, he is happily bailed out (Bailey is bailed out after bailing out the town so many times), first by a homely angel and then by the now prosperous town of the immediate postwar era.

The fact that his rich boyhood chum, the affable frat-boyish Sam Wainwright, is willing to extend $25,000 of his company’s petty cash puts the crisis into wider focus and perspective and makes us realize that George never was really in that much trouble, at least financially (although the SEC might have found such a large transfer to a close friend with a mysterious $8,000 deficit to be suspicious).  Wainwright’s telegram is a comforting wink from Capra himself.  Had he not been so distracted by an accumulation of trying circumstances—the daily slings and arrows of being a big fish in Bedford Falls—this kindness of Sam’s and of the whole town is something that George might have intuited himself, thus preventing his breakdown in the first place.  The bank examiner (district attorney?), in light of the crowd’s vouching for George’s reputation, tears up the summons, grabs a cup of kindness, and heartily joins in singing “Hark! The Herald Angels Sing.”

Still, the loss of $8,000 in Bedford Falls was a crisis that almost drove George to suicide.  If he had been a manager of wartime industrial production, a similar loss would have been a rounding error that nobody but an accountant would have noticed.

At the movie’s end, George is safe and obviously touched by the outpouring of his community, and he appreciates just how good things really are (and you just know that any scene that begins with Donna Reed rushing in and clearing an entire tabletop of Christmas wrapping paraphernalia to make room for a torrential charitable cash flow is going to be ridiculously heart-warming). But George remains as local and provincial as before; he has just been instructed to be happy with the way things have turned out (why not, it’s almost 1946 in America and everything turned out just fine).  His wonderful life has produced a wonderful effort to meet a (still unsolved) crisis.  Just imagine what he could have done with 1940s Federal funding and millions of similarly well-intended people to manage—like those who engineered the New Deal, the WWII mobilization, and the Marshall Plan. Would his name have ranked with the likes of Harry Hopkins, Rex Tugwell, Adolf Berle, Raymond Moley, Frances Perkins, John Kenneth Galbraith, Thomas Corcoran, Benjamin Cohen, Averell Harriman, George Marshall, George Kennan, and Eleanor and Franklin Roosevelt themselves?

It is impossible not to surrender to the warmth and decency of this film’s ending, and I realize that this essay has been minute and dissecting in its analysis.  What is the lesson of all of this?  I think the moral to those of us in 2018 is that below the surface of this wonderful movie is a cautionary tale, and that if we are to face the emerging crises of our own time, we will at the very least require a whole Brains Trust of George Baileys in the right places and legions of local people like his father.  There is a danger in shutting out this kind of person. We must also come to recognize the Mr. Potters of big business and their minions who have dominated for the past half-century.  I suspect that they look nothing like Lionel Barrymore.

The Last Realist: George Herbert Walker Bush

By Michael F. Duggan

There was a time not long ago when American foreign policy was based on the sensible pursuit of national interests.  During the period 1989-1992 the United States was led by a man who was perhaps the most well-qualified candidate for the office in its history—a man who had known combat, who knew diplomacy, intelligence, legislation and the legislative branch, party politics, the practicalities of business and organizational administration, and how the executive and its departments functioned.  For those of us in midlife, it seems like only yesterday, and yet in light of what has happened since in politics and policy, it might as well be a lifetime and a world away.  The question is whether his administration was a genuine realist anomaly or merely a preface to what the nation has become.

Regardless, here’s to Old Man Bush: a good one-term statesman and public servant who was both preceded and followed by two-term mediocrities and mere politicians.  A Commander-in-Chief who oversaw what was arguably the most well-executed large-scale military campaign in United States history (followed by poll numbers that might have been the highest in modern times) only to lose the next election.  A moderate in politics and a good man personally who famously broke with the NRA and gave the nation a very necessary income tax hike on the rich (for which his own party never forgave him), but who, against his better instincts, adopted the knee-to-groin campaign tactics of party torpedoes and handlers in what became one of the dirtiest presidential campaigns in US history (1988) and ushered in the modern period of “gotcha” politics.

Some critics at the time observed that Bush arose on the coattails of others—a loyal subordinate, a second-place careerist and credentialist who silver-medaled his way to the top, a New England blue blood carpetbagger who (along with his sons) ran for office in states far from Connecticut and Maine.  Such interpretations do violence to the dignity, nuance, diversity, and sheer volume of the man’s life.  Bush was the real thing: a public servant—an aristocrat who dedicated most of his life to serving the country.  Prior to becoming President of the United States, Bush served in such diverse roles as torpedo bomber pilot, wildcat oilman, Member of the House of Representatives, Liaison to a newly-reopened China, U.S. Ambassador to the United Nations, Chairman of the RNC, Director of the CIA, and Vice President of the United States.  He was not, however, a spotless hero.

Foreign Affairs

The presidency of George Herbert Walker Bush (just plain “George Bush” prior to the late 1990s) was a brief moment, in some respects an echo of the realism that served the nation so well in the years immediately following WWII.

A foreign policy realist in the best sense of the term, Bush was the perfect man to preside over the end of the Cold War, and my sense is that the most notable foreign policy achievements of the Reagan presidency probably belong even more to his more knowledgeable vice president, with whom he consulted over Thursday lunches.  As president in his own right, it was Bush who, with the help of a first team of pros that included the likes of Brent Scowcroft, James Baker, and Colin Powell, let Russia down gently after the implosion of the USSR (he knew that great nations do not take victory laps), only to be followed by amateurs and zealots who arrogantly pushed NATO right up to Russia’s western border and ushered in what looks increasingly like a dangerous new Cold War.  If a great statesman or stateswoman is one who has successfully managed at least one momentous world event, then his handling of the end of the Cold War alone puts him into this category.

Desert Storm

Interpreted as a singular U.S. and international coalition response to the violation of one nation’s territorial sovereignty by another—and in spite of later unintended consequences—Desert Shield/Storm was a work of art: President Bush gave fair warning (admittedly risky) to allow the aggressor a chance to pull back and reverse course, masterfully sought and got an international mandate and then congressional approval, built a coalition, amassed his forces, went in with overwhelming force and firepower, achieved the goals of the mandate, and got the hell out.  But the success or failure of the “Hundred-Hour War” depends on whether it is weighed as a geopolitical “police action,” as just another episode of U.S. adventurism in the Near East, or as some kind of hybrid.

As a stand-alone event then, the campaign was “textbook,” but then in history there is no such thing as a completely discrete event.  Can the operational success of Desert Storm be separated from what others see as a more checkered geopolitical legacy?  Can the success of the “felt necessities of the time” of a theater of combat be tarnished by later, unseen developments?  Was the “overwhelming force” of the Powell Doctrine (which could equally be called the Napoleon, Grant, MacArthur, or LeMay Doctrine) gross overkill and a preface to the “Shock and Awe” of his son’s war in the region?  Was his calculated restriction of press access in a war zone a precursor to later and even more propagandistic wars with even less independent press coverage?

Just as history never happens for a single reason, no victory is truly singular, pure, and unalloyed.  Twenty-six years on, I realize that my rosy construction of what has since become known as the First Gulf War (or the Second Iraq War in the interpretation of Andrew Bacevich) is not shared by all historians.  Questions remain: was Saddam able to invade Kuwait because Bush and his team were distracted by momentous events in Europe?  Was the Iraqi invasion merely a temporary punitive expedition that could have been prevented if Kuwait hadn’t aggressively undercut Iraqi oil profits?  Would Hussein have withdrawn his forces on his own after sufficiently making his point?  Was April Glaspie speaking directly for President Bush or Secretary Baker when she met with the Iraqi leader on July 25, 1990?  War is a failure of policy; could the events leading up to the invasion (including public comments made by Baker’s spokesperson, Margaret Tutwiler) have been seen by the Iraqis as a green light, in a similar way that the North Koreans could have construed Acheson’s “Defensive Perimeter” speech to the National Press Club in early 1950?  (See Bartholomew Sparrow, The Strategist: Brent Scowcroft and the Call of National Security, 420-421.)

Some historians have been more critical in their “big picture” assessments of Desert Storm, claiming that when placed in the broader context of an almost four-decade-long American war for the greater Middle East, this was just another chapter in a series of misguided escalations (see generally Bacevich, America’s War for the Greater Middle East: A Military History).  In this construction too, the war planners had not decapitated the serpent and had left Hussein’s most valuable asset—the Republican Guard—mostly intact to fight another day against an unsupported American ally whom Mr. Bush had arguably encouraged to rise up: the Iraqi Kurds (as well as Shiites).

While some of these points are still open questions, the mandate of the U.N. Security Council resolution did not include taking out Hussein.  In light of what happened after 2003, when we did topple the regime, Bush I and his planners seem all the more sensible, in my opinion.  Moreover, the “Highway of Death” was beginning to look like just that—a traffic jam of gratuitous murder, laser-guided target practice, “a turkey shoot” against a foe unable to defend himself, much less fight back.  With the Korean War as historical example, Scowcroft was cognizant of the dangers implicit in changing or exceeding the purely military goals of a limited mandate in the face of apparent easy victory.  Having met the stated war aims, Powell and Scowcroft both advocated ceasing the attack, as did Dick Cheney.  (See Sparrow, The Strategist: Brent Scowcroft and the Call of National Security, 414-415.)

When second-guessed about why the U.S. did not “finish the job,” his advisors answered with now haunting and even prophetic rhetorical questions about the wisdom of putting U.S. servicemen between Sunnis and Shiites (James Baker’s later observation about the war in the Balkans that “[w]e don’t have a dog in that fight” seems to have applied equally to internal Iraqi affairs).  Besides, it would have made no sense to remove a powerful secular counterbalance to Iran, thus making it the de facto regional hegemon.  Did the U.S. “abandon” Iraq while on the verge of “saving” it?  Should the U.S. have “stayed” (whatever that means)?  My takeaway from the history of outsiders in the Middle East is that the only thing more perilous than “abandoning” a fight in the region once apparent victory is secured is to continue fighting, and that once in, there is no better time to get out than the soonest possible moment.  It would seem that the history of U.S. adventures in Iraq since 2003—the Neocon legacy of occupation and nation-building—speaks for itself.

Bush’s apparently humanitarian commitment of American forces to the chaos of Somalia in the waning days of his administration still baffles realist sensibilities and seems to have honored Bush’s own principles in the breach.  It simply makes no sense.  One can claim that it was purely a temporary measure that grew under the new administration, but it is still hard to square with the rest of Bush’s foreign policy.

Of course there were other successes and failures of a lesser nature: high-handedness in Central America that included a justified but excessive invasion of Panama.  The careful realist must also weigh his masterful handling of the demise of the Soviet Union against what looks like a modest and principled kind of economic globalization and what appears to be a kind of self-consciously benevolent imperialism: the United States as the good cop on the world beat.  The subsequent catastrophic history of neoliberal globalization and of U.S. adventurism has cast these budding tendencies in a more sobering light.

Politics and Domestic Policy

Domestically, Bush’s generous instincts came to the fore early on and reflected the Emersonian “Thousand Points of Light” of his nomination acceptance address, and he did more than most people realize.  He gave us the Americans with Disabilities Act (ADA)—one of the most successful pieces of social legislation of recent decades—the modest Civil Rights Act of 1991, the 1990 amendment to the Clean Air Act, and a semiautomatic rifle ban; he successfully handled the consequences of the Savings and Loan Crisis, and of course he put David Souter on the High Court.  Perhaps he did not know how to deal with the recession of 1991.  My reading is that the recession was an ominous initial rumbling of things to come, as American workers increasingly became victims of economic globalization.  Some historians believe that the good years of the 1990s owe a fair amount to Bush’s economic policies, including the budget agreement of 1990.  Bush fatefully underestimated the rise of the far right in his own party, making his plea for a “kinder, gentler” nation and political milieu a tragic nonstarter.  His catchphrase from the 1980 campaign characterizing the absurdity of supply-side economics as “voodoo economics” was spot-on, but it was another apostasy that true believers in his own party were unlikely to forget or forgive.  Certainly he did not do enough to address the AIDS crisis.

It is shocking that a man of Bush’s sensibilities and personal qualities conducted the presidential campaign of 1988 the way he did.  Against a second-rate opponent, the “go low” approach now seems like gross and unnecessary overkill—a kind of political “Highway of Death”—that was beneath the dignity of such an honorable man.  On a similar note, it is hard to understand his occasional hardball tactics, like the bogus fight he picked with Dan Rather on live television at the urging of handlers.  Perhaps it was to counter the charges of his being a “wimp.”

Again, this approach seems to have been completely unnecessary—an overreaction urged by politicos and consultants from the darker reaches of the campaign arts.  How is it even possible that a playground epithet like “wimp” could find traction against a man of Bush’s demonstrated courage, honor, and commitment?  All anybody had to do was remind people that he was the youngest navy pilot in the Second World War, that he had enlisted on the first day he legally could, and that he was fished out of the Pacific after being shot down in an Avenger torpedo bomber (but then Bush embodied an ethos of aristocratic modesty and the idea that one did not talk about oneself, much less brag); by comparison, the rugged Ronald Reagan never went anywhere near a combat zone (as a documentary on the American Experience noted, “Bush was everything Reagan pretended to be”: a war hero, college athlete, and a family man whose children loved him unconditionally).  I am not sure Clinton ever made any pretense of fortitude.

We ask our presidents to succeed in two antithetical roles: that of politician and that of statesman, and in recent years the former has triumphed, seemingly at the expense of the latter.  Style has mostly trumped substance, something that underscores a flaw in our system and what it has become.  As casualties of reelection campaigns against charismatic opponents, Gerald Ford and “Bush 41” might be a metaphor for this flaw and of our time, and a lesson emphasizing the fine distinction that a single-term statesman is generally superior and preferable to a more popular two-term politician.  Reagan, Clinton, Bush 43, and Obama were all truly great politicians, and unless you were specifically against them or their policies, there was a reasonable chance that they could win you over on one point or another with style, communication skills, and magnetic charm.  That said, and unlike the senior Bush, I would contend that there is not a genuine statesman in that group.

It is difficult for any president to achieve greatness in either foreign or domestic affairs, much less in both (as a latter-day New Dealer, I would say that FDR may have been the last to master both).  George Herbert Walker Bush was a good foreign policy president and not bad overall—a leader at the heart of a competent administration.  By all accounts, he was a good man, and the people who knew him are heaping adjectives on his memory: dignity, humility, honor, courage, class—a good president and a notable American public servant.  But ultimately personal goodness has little to do with the benevolence or harm of policy; to paraphrase Forrest Gump, good is what good does (some policy monsters are personally charming and even decent, while some insufferable leaders may produce great and high-minded policy), and as aging news transforms with greater circumspection into history, the jury is still out on much of the complex legacy of Bush I.

Subsequent events have cast doubt on what seemed at the time to be spotless successes, and realistic gestures now seem more like a preface to less restrained economic internationalism and military adventurism.  Still, I am willing to give the first President Bush the benefit of the doubt on interpretations of events still in flux.  Just in writing this, and given what has happened in American politics and policy ever since, I have the sinking feeling that we will not see his like again for a long time, if ever.

Geoffrey Parker

Book Review

Geoffrey Parker, Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century, Yale University Press, 2014, 904 pages.

Crises, Then and Now

Reviewed by Michael F. Duggan

This book is about a time of climate disasters, never-ending wars, and economic globalism complete with mass human migration, imbalances, and subsequent social strife—a period characterized by unprecedented scientific advances and backward superstition.  In other words, it is a world survey of the web of events known as the Seventeenth Century.  Although I bought it in paperback a number of years ago, I recently found a mint-condition hardback copy of this magisterial tome by master historian Geoffrey Parker (Cambridge, St. Andrews, Yale, &c.) and felt compelled to write about it, however briefly.  I have always been drawn to this century because of its contrasts: it straddles the transition from the Early Modern period to the Ages of Reason and Enlightenment and more broadly marks the final shift from Medieval to Modern (even before the Salem colonists hanged neighbors suspected of witchcraft, Leibniz and Newton had independently begun to formulate the calculus).

In 1959, historian H. R. Trevor-Roper presented the macro-historical thesis of the “General Crisis”—the interpretive premise that the Seventeenth Century can be characterized by an overarching series of crises, from horrible regional wars (e.g. the Thirty Years War, the English Civil War and its spillover into Scotland and Ireland) and rebellions, to widespread human migration and the subsequent spread of disease, any number of specific plagues, global climate change, and a long litany of some of the most extreme weather events in recorded history (e.g. the “Little Ice Age”).  When I was in graduate school, I had intuited this premise (perhaps after reading Barbara Tuchman’s A Distant Mirror, about the “calamitous Fourteenth Century”), but was hardly surprised upon discovering that Trevor-Roper had scooped the idea by 40 years.

Parker has taken this thesis and generalized it in detail beyond Europe to encompass the entire world—to include catastrophic events and change throughout the Far East, Russia, China, India, Persia, the greater Near East, Africa, North America, etc.  Others, including Trevor-Roper himself, also saw this in terms of global trends and scope, but, to my knowledge, Parker’s book is the fullest and most fleshed-out treatment.  It is academic history, but it is well-written (and readable for a general audience), well-researched history on the grandest of scales.  For provincial Western historians (such as myself), the broader perspective is eye-opening and suggestive of human commonality rather than divergence; we are all a part of an invasive plague species, and we are all victims of events, nature, and our own nature.

Although I am generally skeptical of macro interpretive theories/books that try to explain or unify everything that happened during a period under a single premise–i.e. the more a theory tries to explain, the more interesting and important it is, but the weaker it usually is as a theory and therefore the less it explains (call it a Heisenberg principle of historiography)–this one may be on to something, at least as description.  The question, I suppose, is the degree to which the events of this century, overlapping or sequential in both geography and time, are interconnected or emerge from common causes, or whether they were a convergence of factors both related and discrete–is the century a crisis, a sum of crises, or both?  To those who see human history in the broadest of terms–in terms of the environment, of humankind as a singular prong of biology, and therefore of human history as an endlessly interesting and increasingly tragic chapter of natural history–this book will be of special interest.

As someone who thinks that one of the most important and productive uses of history is to inform policy and politics, I find it apparent (obvious, really) that the author intends this book to be topical–a wide-angle and yet detailed account of another time for our time.  In general, the Seventeenth Century is a good tonic for those who believe that history is all sunshine and roses or that human progress (such as it is) is all a rising road.  A magnum opus of breathtaking scope and ambition, this book is certainly worth looking at (don’t be put off by its thickness; you can pick it up at any time and read a chapter here or there).



Fat Man and Little Boy

I wrote this for the 70th Anniversary of the atomic bombings of Japan.  It appeared in an anthology at Georgetown University.  This is taken from a late draft, but the editing is still a bit rough.


Roads Taken and not Taken: Thoughts on “Little Boy” and “Fat Man” Plus-70

By Michael F. Duggan

We knew the world would not be the same.  A few people laughed, a few people cried. Most people were silent.  I remembered the line from the Hindu scripture, the Bhagavad Gita… “I am become Death, the destroyer of worlds.”

-Robert Oppenheimer


When I was in graduate school, I came to sort perspectives on the decision to drop the atomic bombs on Japan into three categories.

The first was the “Veterans Argument”—that the dropping of the bombs was an affirmative good.  As this name implies, it was a position embraced by some World War Two veterans and others who had lived through the war years and seems to have been based on lingering sensibilities of the period.  It was also based on the view that the rapid end of the war had saved many lives—including their own, in many cases—and that victory had ended an aggressive and pernicious regime.  It also seemed tinged with an unapologetic sense of vengeance and righteousness cloaked as simple justice.  They had attacked us, after all—Remember Pearl Harbor, the great sneak attack?  More positively, supporters of this position would sometimes cite the fact of Japan’s subsequent success as a kind of moral justification for dropping the bombs.

Although some of the implications of this perspective cannot be discounted, I tended to reject it; no matter what one thinks of Imperial Japan, the killing of more than 150,000 civilians can never be an intrinsic good.  Besides, there is something suspect about the moral justification of horrible deeds by citing all of the good that came after them, even if true.1

I had begun my doctorate in history a few years after the 50th anniversary of the dropping of the Hiroshima and Nagasaki bombs, and by then there had been a wave of “revisionist” history condemning the bombings as intrinsically bad, as inhumane and unnecessary—as “technological band-aids” to end a hard and bitter conflict.  The argument was that by the summer of 1945, Japan was on the ropes—finished—and would have capitulated within days or weeks even without the bombs.  Although I had friends who subscribed to this position, I thought that it was unrealistic in that it interjected idealistic sensibilities and considerations that were foreign to the period and the “felt necessities of the times.”

This view was also associated with a well-publicized incident of vandalism against the actual Enola Gay at a Smithsonian exhibit, part of a controversy that forced the museum to change its interpretive text to tepid factual neutrality.

And then there was a kind of middle-way argument—a watered-down version of the first—asserting that the dropping of the bombs, although not intrinsically good, was the best of the possible options.  The other primary option was a two-phased air-sea-land invasion of the main islands of Japan: Operation Olympic, scheduled to begin on November 1, 1945, and Operation Coronet, scheduled for early March 1946 (the two operations were subsumed under the name Operation Downfall).  I knew people whose fathers and grandfathers, still living, had been in WWII and believed with good reason that they would have been killed fighting in Japan.  It was argued that the American casualties for the war—approximately 294,000 combat deaths—would have been multiplied two- or threefold if we had invaded, to say nothing of the additional millions of Japanese civilians who would likely have died resisting.  The Okinawa campaign of April-June 1945—the viciousness and intensity of the combat there and the appalling casualties on both sides—was regarded as a kind of microcosm, a prequel of what an invasion of Japan would be like.2

The idea behind this perspective was one of realism: in a modern total war against a fanatical enemy, one took off the gloves in order to end it as soon as possible.  General Curtis LeMay asserted that it was the moral responsibility of all involved to end the war as soon as possible, and that if the bombs ended it by even a single day, then using them was worth the cost.3  One also heard statements like “what would have happened to an American president who had a tool that could have ended the war, but chose not to use it, and by doing so doubled our casualties for the war?”  It was simple, if ghastly, math: the bombs would cost less in terms of human life than an invasion.  With an instinct toward the moderate and sensible middle, this was the line I took.

In graduate school, I devoured biographies and histories of the Wise Men of the World War Two/Cold War-era foreign policy establishment—Bohlen, Harriman, Hopkins, Lovett, Marshall, McCloy, Stimson, and of course, George Kennan.  When I read Kai Bird’s biography, The Chairman: John J. McCloy and the Making of the American Establishment, I was surprised by some of the back stories and the wrangling of the policy makers behind the decision to drop the bombs.4  It also came as a surprise that John McCloy (among others) had in fact vigorously opposed the dropping of the atomic bombs, perhaps with very good reason.

Assistant Secretary of War John McCloy was nobody’s idea of a dove or a pushover.  Along with his legendary policy successes during and after WWII, he was controversial for ordering the internment of Japanese Americans and for the decision not to bomb the death camps in occupied Europe, because doing so would divert resources from the war effort and victory.  He was also the American High Commissioner for occupied Germany after the war, where he kept fairly prominent Nazis in their jobs and kept out of prison German industrialists who had played ball with the Nazi regime.  Notably, in the 1960s, he was one of the few people on record who flatly stood up to President Lyndon Johnson after getting the strong-armed “Johnson treatment” and was not ruined by it.  And yet this tough-guy hawk was dovish on the issue of dropping the atomic bombs.

The story goes like this: In April and May 1945, there were indications that the Japanese were seeking a settled end to the war via diplomatic channels in Switzerland and through communications with the Soviets—something corroborated by U.S. intelligence.5 Armed with this knowledge, McCloy approached his boss, Secretary of War “Colonel” Henry L. Stimson, arguably the father of the modern U.S. foreign policy establishment.  McCloy told Stimson that the new and more moderate Japanese Prime Minister, Kantaro Suzuki, and his cabinet were looking for a face-saving way to end the war.  The United States was demanding an unconditional surrender, and Suzuki indicated that if this language were modified, and the Emperor allowed to remain as a figurehead under a constitutional democracy, Japan would surrender.

Among American officials, the debates on options for ending the war included many of the prominent players, policy makers and military men like General George C. Marshall, Admiral Leahy and the Chiefs of Staff, former American ambassador to Japan Joseph Grew, and Robert Oppenheimer (the principal creator of the bomb) and his Scientific Advisory Panel, to name but a few.  It also included President Harry Truman.  Among the options discussed were whether or not to give the Japanese “fair warning” and whether the yet-untested bomb should be demonstrated in plain view of the enemy.  There were also considerations of deterring the Soviets, who had agreed at Yalta to enter the war against Japan, from additional East Asian territorial ambitions.  Although it was apparent to Grew and McCloy that Japan was looking for a way out, therefore making an invasion unnecessary, the general assumption was that if the atomic bombs were functional, they should be used without warning.

This was the recommendation of the Interim Committee, which included soon-to-be Secretary of State James Byrnes, and which was presented to Truman by Stimson on June 6.6  McCloy disagreed with these recommendations and cornered Stimson in his own house on June 17.  Truman would be meeting with the Chiefs of Staff the following day on the question of invasion, and McCloy implored Stimson to make the case that the end of the war was days or weeks away and that an invasion would be unnecessary.  If the United States merely modified the language of unconditional surrender and allowed the Emperor to remain, the Japanese would surrender under de facto unconditional conditions.  If the Japanese did not capitulate after the changes were made and fair warning was given, the option of dropping the bombs would still be available.  “We should have our heads examined if we don’t consider a political solution,” McCloy said.  As it turned out, he would accompany Stimson to the meeting with Truman and the Chiefs.

Bird notes that the meeting with Truman and the Chiefs was dominated by Marshall and focused almost exclusively on military considerations.7  As Bird writes “[e]ven Stimson seemed resigned now to the invasion plans, despite the concession he had made the previous evening to McCloy’s views.  The most he could muster was a vague comment on the possible existence of a peace faction among the Japanese populace.”  The meeting was breaking up when Truman said “No one is leaving this meeting without committing himself.  McCloy, you haven’t said anything.  What is your view?” McCloy shot a quick glance to Stimson who said to him, “[s]ay what you feel about it.”  McCloy had the opening he needed.8

McCloy essentially repeated the argument he had made to Stimson the night before.  He also noted that a negotiated peace with Japan would preclude the need for Soviet assistance, thereby depriving the Soviets of any excuse for an East Asian land grab.  He also committed a faux pas by actually mentioning the bomb by name and suggesting that it be demonstrated to the Japanese.  Truman responded favorably, saying “That’s exactly what I’ve been wanting to explore… You go down to Jimmy Byrnes and talk to him about it.”9  As Bird points out,

[b]y speaking the unspoken, McCloy had dramatically altered the terms of the debate.  Now it was no longer a question of invasion.  What had been a dormant but implicit option now became explicit.  The soon-to-be tested bomb would end the war, with or without warning.  And the war might end before the bomb was ready.”  Increasingly, however, the dominant point of view was that the idea of an invasion had been scrapped and that, in the absence of a Japanese surrender, the bombs would be dropped.10

After another meeting with what was called the Committee of Three, most of the main players agreed “that a modest change in the surrender terms might soon end the war” and that “Japan [would be] susceptible to reason.”11  Stimson put McCloy to work at changing the terms of surrender, specifically the language of Paragraph 12, which referenced the terms that the Japanese had found unacceptable.  McCloy did not mention the atomic bomb by name.  By now, however, Truman was gravitating toward Byrnes’s position of using the bombs.

After meeting with the president on July 3, Stimson and McCloy “solicited a reluctant invitation” to attend the Potsdam Conference, but instead of traveling with the President’s entourage aboard the USS Augusta, they secured their own travel arrangements to Germany.  Newly sworn-in Secretary of State James Byrnes would sail with the president and was a part of his onboard poker group.12  The rest, as they say, is history.

At Potsdam, Truman was told by the Soviets that Japan was once again sending out feelers for a political resolution.  Truman told Stalin to stall for time, while reasserting the demand for unconditional surrender in a speech in which he buried the existence of the bombs in language so vague that it is likely the Japanese leaders did not pick up on the implications.13  Japan backed away.  Truman’s actions seem to suggest that, under Byrnes’s influence (and perhaps independent of it), he had made up his mind to drop the bombs and wanted to sabotage any possibility of a political settlement.  As Bird notes, “Byrnes and Truman were isolated in their position; they were rejecting a plan to end the war that had been endorsed by virtually all of their advisors.”14  Byrnes’s position had been adopted by the president over the political option of McCloy.  As Truman sailed for home on August 6, 1945, he received word that the uranium bomb nicknamed “Little Boy” had been dropped on Hiroshima, with the message “Big bomb dropped on Hiroshima August 5 at 7:15 P.M. Washington time.  First reports indicate complete success which was even more conspicuous than earlier test.”  Truman characterized the attack as “the greatest thing in history.”15  Three days later the plutonium bomb “Fat Man” fell on Nagasaki.  The Soviets entered the fighting against Japan on August 8.  The war was over.

Given Byrnes’s reputation as a political operative of rigid temperament and often questionable judgment, one can only wonder if the dropping of the bombs was purely gratuitous.  Did he and the president believe that the American people wanted and deserved their pound of flesh almost four years after Pearl Harbor and some of the hardest combat ever fought by U.S. servicemen?16  Of course, there were also the inevitable questions of “what would Roosevelt have done?”

With events safely fixed in the past, historians tend to dislike messy and problematic counterfactuals, and one can only wonder if McCloy’s plan for a negotiated peace would have worked.  One of the most constructive uses of history is to inform present-day policy decisions through the examination of what has worked and what has not worked in the past, and why.  Even so, the vexing—haunting—questions about the necessity of dropping the atomic bombs remain open.  The possibility of a political resolution to the war seems at the very least to have been plausible.  The Japanese probably would have surrendered by November, perhaps considerably earlier, as the result of negotiations, but there is no way to tell for certain.17  As it was, in August 1945, Truman decided to allow the Emperor to stay on anyway, and our generous reconstruction policies turned Japan (and Germany) into miracles of representative liberal democracy and enlightened capitalism.

Even if moderate elements in the Japanese government had been able to arrange an effective surrender, there is no telling whether the Japanese military, and especially the army, would have gone along with it; as it was—and after two atomic bombs had leveled two entire cities—some members of the Japanese army still preferred self-destruction over capitulation, and a few even attempted a coup against the Emperor to preempt his surrender speech to the Japanese people.

This much is certain: our enemies in the most costly war in human history have now been close allies for seven decades (as the old joke goes, if the United States had lost WWII, we would now be driving Japanese and German cars).  Likewise, our Cold War enemy, the Russians, in spite of much Western tampering within their sphere of influence, now pose no real threat to us.  But the bomb remains.

Knowledge may be lost, but an idea cannot be un-invented; as soon as a human being put arrow to bow, the world was forever changed.  The bomb remains.  It remains in great numbers in at least nine nations and counting, in vastly more powerful forms (the hydrogen bomb) with vastly more sophisticated means of delivery.  It is impossible to say whether the development and use of the atomic bomb was and is categorically bad, but it remains for us a permanent Sword of Damocles and the nuclear “secret” is the knowledge of Prometheus.  It is now a fairly old technology, the same vintage as a ’46 Buick.

The bombings of Hiroshima and Nagasaki broke the ice on the use of these weapons in combat and will forever stand as a precedent for anyone else who may use them.  The United States is frequently judgmental of the actions and motives of other nations, and yet the U.S. remains the only nation to have used nuclear weapons in war.  As with so many people in 1945 and ever since, Stimson and Oppenheimer both recognized that the atomic bomb had changed everything.  More than any temporal regime, living or dead, it and its progeny remain a permanent enemy of mankind.



  1. For a discussion of the moral justification in regard to dropping the atomic bombs, see John Gray, Black Mass, New York: Farrar, Straus and Giroux, 2007, pp. 190-191.
  2. For an account of the fighting on Okinawa, see Eugene Sledge, With the Old Breed, New York: Random House, 1981.
  3. LeMay expresses this sentiment in an interview he gave for the 1973 documentary series, The World at War.
  4. See generally Chapter 12, “Hiroshima,” in Kai Bird, The Chairman: John J. McCloy and the Making of the American Establishment, New York: Simon and Schuster, 1992, pp. 240-268.
  5. Bird, p. 242.
  6. Bird, p. 244.
  7. Bird, p. 245.
  8. Bird, p. 245.
  9. Bird, p. 246.
  10. Bird, p. 250.
  11. Bird, pp. 247-248.
  12. Bird, pp. 249-250; Averell Harriman and Elie Abel, Special Envoy to Churchill and Stalin, 1941-1946, New York: Random House, 1975, p. 493; Bird, p. 251.  It should be noted that most of the top American military commanders opposed dropping the atomic bombs on Japan. As Daniel Ellsberg observes: “The judgment that the bomb had not been necessary for victory—without invasion—was later expressed by Generals Eisenhower, MacArthur, and Arnold, as well as Admirals Leahy, King, Nimitz, and Halsey. (Eisenhower and Halsey also shared Leahy’s view that it was morally reprehensible.)  In other words, seven out of eight officers of five star rank in the U.S. Armed Forces in 1945 believed that the bomb was not necessary to avert invasion (that is, all but General Marshall, Chief of Staff of the Army, who alone believed that an invasion might have been necessary).” [Emphasis added by Ellsberg.]  See Daniel Ellsberg, The Doomsday Machine, New York: Bloomsbury, 2017, pp. 262-263.  As it happened, Eisenhower was having dinner with Stimson when the Secretary of War received the cable saying that the Hiroshima bomb had been dropped and that it had been successful.  “Stimson asked the General his opinion and Eisenhower replied that he was against it on two counts.  First, the Japanese were ready to surrender and it wasn’t necessary to hit them with that awful thing.  Second, I hate to see our country be the first to use such a weapon.  Well… the old gentleman got furious.  I can see how he would.  After all, it had been his responsibility to push for all of the expenditures to develop the bomb, which of course he had the right to do, and was right to do.”  See John Newhouse, War and Peace in the Nuclear Age, New York: Alfred A. Knopf, 1989, p. 47.  Newhouse also points out that there were numerous political and budgetary considerations related to the opinions of the various players involved in developing and dropping the bombs.  One can only hope that budgetary responsibility/culpability did not (or does not) drive events.
  13. Harriman, p. 293.
  14. For his own published account of this period, see James F. Byrnes, Speaking Frankly, New York: Harper Brothers & Company, 1947.
  15. See Robert Dallek, The Lost Peace, New York: HarperCollins, 2010, p. 128. Dallek makes this point, basing it on the Strategic Bombing Survey, as well as the reports of Truman’s own special envoy to Japan after the war in October 1945.


Daniel Ellsberg

Book Review

Daniel Ellsberg, The Doomsday Machine: Confessions of a Nuclear War Planner, New York: Bloomsbury, 2017, 420 pages, $30.00 (hardcover).

In the Shadow of the Mushroom Cloud (or: Bigger than the Pentagon Papers)

Reviewed by Michael F. Duggan

Before many centuries more… science may have the existence of mankind in its power, and the human race commit suicide by blowing up the world.

-Henry Adams


As it turns out, Stanley Kubrick got it mostly right.

“We came out into the afternoon sunlight, dazed by the light and the film [Dr. Strangelove], both agreeing that what we had just seen was essentially a documentary.  (We didn’t yet know—nor did SAC—that existing strategic operational plans, whether for first strike or retaliation, constituted a literal Doomsday Machine, as in the film.)”  Daniel Ellsberg, The Doomsday Machine, p. 65.

You should read this book, but not at bedtime.

This was the story Daniel Ellsberg, a nuclear strategist in the late 1950s and 1960s, wanted to tell, but the fact that “Vietnam is where the bombs are falling right now [1969]” forced his hand and diverted his attention elsewhere.  The overarching theme of his recent book—the overwhelming feeling one comes away with—is that it is almost literally a miracle, a fortuitous aberration of probability, that the United States and the Soviet Union did not blow up the world during the Cold War.  What is more, the risk is still in place, and the threat of a nuclear war is greater than ever.  A moral of the book is that wholesale war against civilians—characterized by strategic terror bombing and reaching its apex in the omnicidal possibilities of nuclear war—is not only immoral but a dubious means of winning wars.  It is likely the grandest expression of the irrationality of war and of our aggressiveness as an animal.

In a sense, Ellsberg is a latter-day American Siegfried Sassoon—the true believer turned apostate in the name of humanity, the patriot with a greater commitment to the truth, the man who saw insanity and folly and chose sense and sanity.  Of course, his name will always be associated with the Pentagon Papers, which exposed the true motives of the war in Vietnam—a rivulet that fed the deluge that eventually forced President Nixon from office.  He is arguably the prototype of the modern whistle-blower.  The present book tells an even bigger story, one that its author has waited a half-century to tell.

In fact, Ellsberg came close to telling this story at the time, but the thousands of pages he copied on nuclear strategy were lost in an almost comical sequence of events, including the intervention of a tropical storm—a loss that, by his own admission, likely spared him decades of hard prison time.  He can now rely on his own memory, corroborated by material declassified over the years, without fear of breaking the law.  Although much of the material here was previously known to historians of the Cold War, it is still likely to shock when presented so starkly by a person so intimately connected with the topic.

Ellsberg begins by recalling that as a thirteen-year-old, he and his ninth-grade friends immediately latched on to the inherently problematic, unavoidable, and insurmountable implications of the mere existence of super-weapons that could destroy entire cities in a single blow, and of nations armed with such technology.  These high school freshmen hit upon conclusions usually associated with physicists working on the Manhattan Project and epitomized by Robert Oppenheimer’s chilling paraphrase of the Bhagavad Gita: “I have become Death, the destroyer of worlds.”

Ellsberg’s social studies instructor, Bradley Patterson, was teaching the concept of “cultural lag,” or the idea that technology runs ahead of the cultural, social, and political ability to handle it—i.e. “to control it wisely, ethically, prudently.”  In the fall of 1944, the teacher had his students consider the idea of nuclear weapons (articles on the possibility of a Uranium-235 bomb had already appeared in the Saturday Evening Post and other magazines) as a kind of ultimate or paragon example of this concept.  The students were given a week to write an essay on the implications of such a weapon.

“As I remember, everybody in the class had arrived at much the same judgment.  It seemed pretty obvious: the existence of such a bomb would be bad for humanity.  Mankind could not handle such a destructive force.  It could not be safely controlled.  The power would be “abused”—that is, used dangerously, with terrible consequences… A bomb like that was just too powerful.”

The first part of this book, “The Bomb and I,” deals with the ins and outs, the subtleties, caveats, conundrums, hypotheticals, and counter-hypotheticals of the game-theory logic imposed by nuclear weapons on strategists during the Cold War.  It is a personal history of the implementation of nuclear strategy, unsettling breaches in the system, near accidents, and the potential for global thermonuclear catastrophe in the Manichean world of U.S.-Soviet relations.  It is Ellsberg’s own story as a whiz kid, a consultant for the Air Force’s RAND (Research ANd Development) Corporation—its in-house think tank.  As with the Pentagon Papers, Ellsberg’s purpose is to present what he saw versus the official line.

Summarizing in his introduction, Ellsberg states eight realities of American nuclear strategy that set the theme of the book.  These are:

  1. “The basic elements of American readiness for nuclear war remain today what they were almost sixty years ago: Thousands of nuclear weapons remain on hair-trigger alert, aimed mainly at Russian targets,” and the declared official rationale—deterring “an aggressive Russian first strike”—is a “deliberate deception.”  According to Ellsberg, “[d]eterring a surprise nuclear attack has never been the only or even the primary purpose of our plans and preparations.”  Rather, “[t]he nature, scale, and posture of our strategic nuclear forces has always been shaped around the requirements of quite different purposes: to attempt to limit the damage to the United States from Soviet or Russian retaliation to a U.S. first strike against the USSR or Russia.  This capability is, in particular, intended to strengthen the credibility of U.S. threats to initiate limited nuclear attacks, or escalate them—U.S. threats of ‘first use’—to prevail in regional, initially non-nuclear conflicts involving Soviet or Russian forces or their allies.”
  2. “The required U.S. strategic capabilities have always been for a first-strike force,” neither a surprise attack nor one “with an aim of striking ‘second’ under any circumstances, if that could be avoided by preemption.”  In other words, “[t]hough officially denied, preemptive ‘launch on warning’ (LOW)—either on tactical warning of an incoming attack or a strategic warning that nuclear escalation is probably impending—has always been at the heart of our strategic alert.”
  3. Contrary to popular belief, nuclear weapons have been used “dozens of times in ‘crises’” since their actual combat use over Hiroshima and Nagasaki.  This has been done “mostly in secret from the American people (though not from adversaries).  They have used them in the precise way that a gun is used when it is pointed at someone in a confrontation, whether or not the trigger is pulled.  To get one’s way without pulling the trigger is a major purpose for owning a gun.”
  4. “Posing as it does the threat of nuclear attack by the United States to every state that might potentially be in conflict with us (like North Korea), this persistent rejection by the United States of a no-first-use commitment has always precluded an effective nonproliferation campaign.”
  5. “With respect to deliberate, authorized U.S. strategic attacks, the system has always been designed to be triggered by a far wider range of events than the public has ever imagined.  Moreover, the hand authorized to pull the trigger on nuclear forces has never been exclusively limited to the president, nor even his highest military officials.”  “Dead hand” systems of delegation of nuclear launch authority probably exist in the systems of all nuclear powers, most likely including North Korea.
  6. During the Cuban Missile Crisis, “events spiraled out of control, coming within a handbreadth of triggering our plans for general nuclear war” (and we should bear in mind that this was a crisis presided over by two rational leaders looking for a way out of the standoff).
  7. “The strategic nuclear system is more prone to false alarms, accidents, and unauthorized launches than the public (and even most high officials) has ever been aware.”  Ellsberg notes that false alarms did in fact occur in 1970, 1980, 1983, and 1995.
  8. “Potentially catastrophic dangers such as these have been systematically concealed from the public.”  Not even the Joint Chiefs of Staff realized until 1983 that the nuclear winter that would follow a general nuclear war between the U.S. and the U.S.S.R. would probably kill every person on the planet.

He concludes the introduction by observing that “[i]n sum, most aspects of U.S. nuclear planning and force readiness that became known to me half a century ago still exist today as prone to catastrophe as ever but on a scale, as known to environmental scientists, looming vastly larger than was understood then,” and, more economically, “[t]ragically, I believe that nothing has fundamentally changed.”

It is hard to know where to begin with this book (the eight points above should give the reader a fair, generalized sample to chew on).  It is fascinating history, and, like a hero of fiction, the young Ellsberg always seems to be in the center of things.  Following Harvard and a three-year hitch as a Marine Corps infantry officer, he is thrown in as a consultant with a brilliant generation of whiz kids at RAND.  From there he recounts episodes including an eye-opening interview with a squadron leader of nuclear-armed aircraft on the front lines of the Cold War, hearing a confession of alleged pre-appointed nuclear authority by an Air Force theater commander, and discussions with other high-level generals, including the cigar-chomping Curtis LeMay himself.  He writes a speech intended for President Kennedy that meets with McNamara’s approval but which is given by Deputy Secretary of Defense Roswell Gilpatric instead.  He warns the haughty incoming National Security Adviser, McGeorge Bundy, about the numerous lapses in the system, including the usurpation of the chain of command and the undermining of civilian control.

With academic and military credentials, Ellsberg had a Zelig-like knack for being in the right place at the right time.  He was well qualified to be both a detective of chinks in the system and the deliverer of often shocking messages, but to no avail.  The lesson seems to be that even the planners of nuclear strategy were just as much captive to the self-directed logic of what was seen as a bipolar world as the unsuspecting rest of the nation, and just as helpless to do anything about it.  Although nuclear war is averted by human agency during the Cuban Missile Crisis, the larger game continues and seems mostly immune to the efforts of people who see the madness.

Although the book is well structured—and it is better to read it for oneself rather than have a reviewer recount it chapter by chapter—one comes away with a myriad of troubling facts and imagery, of things generally unknown at the time (and still unknown by most Americans): drummed-up fictions like the missile gap and bogus theatrical props like the nuclear “football.”  One is initially shocked and then overwhelmed and eventually numbed by a sequence of revelations like the inevitability of pre-approved delegation of nuclear launch authority, the daily breakdown of communications between Washington and bases in the Pacific, how commanders and even pilots circumvented launch codes, how the Joint Chiefs of Staff got around civilian control authority, and how civilian authorities were kept in the dark about nuclear war plans.

One is taken aback at the lack of clarity in the minds of the men who would actually be flying nuclear-armed aircraft about the circumstances under which they might launch an unauthorized attack (e.g., if the last plane in a squadron crashed on takeoff, thus detonating a thermonuclear weapon on its own base, would the pilot of an aircraft that had already taken off assume that the base had been attacked by the Soviets or Chinese and proceed with an attack in what was intended only to be a drill?).  It is all a stark reminder of how close we came to blowing up everything, and how a Guns of August sequence of events with technology far beyond that of the Missiles of October is still a very real possibility (his retelling of the now well-known story of how close a Soviet submarine under depth-charge attack from a U.S. ship on the blockade line came to launching a nuclear weapon during the Cuban Missile Crisis is particularly harrowing).

Having grown up in a military family during the Cold War, I learned of the nuclear standoff of superpowers at the tender age of eight or nine.  I was of a generation, the more sensitive members of which could imagine the contrails of ICBMs imposed on clear nighttime skies.  While I was working on my doctorate in history, I had read John Lewis Gaddis’s masterful Strategies of Containment and had come away thinking that both sides had unnecessarily ratcheted up tensions (first with Nitze’s NSC-68 and later with the “New Look” of the Eisenhower years), that the Cold War was an unnecessarily dangerous and “costly political rivalry.”1  I did not know that, just in surviving the period, the world had in fact won a lengthy sequence of lotteries.

On the one hand, American triumphalists and boosters of their nation’s “victory” in the Cold War (now completely squandered) point to the zero-sum, game theory logic of deterrence, of Mutual Assured Destruction, and how it apparently worked.  The idea, seemingly oxymoronic, is generally attributed to Bernard Brodie and the view that in order to prevent nuclear war, a nation must “be prepared to resort to atomic war” and to make it too terrible to be a viable option.2  Making nuclear war mutually suicidal seems to have accomplished this to date.  But on the other hand, being in a Mexican standoff with the most destructive weapons ever conceived is hardly an admirable position in which to find oneself, and it is a state of affairs that only has to fail once.  Add to this the fact that human beings are naturally aggressive animals, that unhinged leaders come to power from time to time, the role of accidents in history, hair-trigger strategies of first strike, and an ever-increasing nuclear club, and the rational reader of Ellsberg’s book can be excused for wanting to get off the planet.3

Part II. History of Bombing Civilians

The second part of the book, “The Road to Doomsday,” is a history of strategic bombing as the natural predecessor to nuclear war.  This part is obviously less personal but gives an impressive outline of how we got to where we are in terms of not batting an eye at accepting civilian deaths as “collateral damage” and seeing non-combatants as legitimate targets in war.  In some respects, this topic is a later chapter, a continuation of the more general history of the growth of modern total warfare since Napoleon, and certainly since the American Civil War.  Even so, it is remarkable to compare the unconcealed disgust of commentators like Theodore Roosevelt at the intentional targeting of a (mostly) civilian liner like the Lusitania in 1915 with the casual acceptance of the bombing of entire cities in the Second World War by American political leaders and their constituents.

Indeed, as a child in the 1970s reading of the air campaigns of the Second World War, there was no greater symbol of heroism for me than the gorgeous lines and the all-business armament configuration of the B-17 Flying Fortress (the far more effective and severely aerodynamic B-29 never achieved the same appeal), and the brave men who flew them.  To this day, the sight of a B-17 arouses the child in me, although I am certain that Germans who were children in 1943 or 1944 in Hamburg, Munich, or Dresden do not share my affection for this plane.

As a practical matter, it is not clear that wholesale strategic bombing is an effective basis for strategy.  Theorists and planners between the wars, like Giulio Douhet, believed that if total war could be brought to the cities and heartlands of an enemy nation, wars could be driven to a quick and decisive conclusion.

As regards Germany, this does not appear to have been the case, as aircraft and tank production continued to increase until the final month or two of the war.  In fact, strategic bombing may have only been successful in Europe against oil production and transportation.  In Japan, bombing had turned most of the major cities to ashes, and yet American war planners still feared such fierce resistance by the civilian population that they felt justified in dropping two atomic bombs.  Even here it is not clear whether the bombs were the decisive factor in ending the war in the Pacific, or whether it was the simultaneous intervention of the USSR in that theater, or both.4  It would seem that Japan was mostly defeated on the great island-dotted battlefield of the South Pacific.

Douhet’s dream of aerial war breaking the will of an enemy people does not have a record of the decisiveness that he sought.  One of the most severe bombing campaigns in history did not break the will of the North Vietnamese, nor did a similarly impressive campaign over North Korea force a surrender.  Bombing does change people, however, and the behavior of the North Koreans since 1953 and the genocide in Cambodia during the 1970s are likely attributable in large measure to the strategic bombing campaigns launched against them.

In 1946, George Kennan suggested that the world revert to limited Jominian wars5—the “cabinet wars” of the eighteenth century that followed in the wake of the total wars of seventeenth-century Europe.  His idea was that the purpose of war should be to minimize and not maximize casualties, that “[v]iolence… could not be an objective.”6  Nuclear weapons, and Bernard Brodie’s logic of making war too horrible to be tolerated, in fact render war obsolete as a practical matter, and the possibility of a war launched by accident or miscalculation makes it additionally intolerable.  And yet, as the Flexible Response alternative to Mutual Assured Destruction has demonstrated time and time again in the many regional wars since the early 1960s, limited military options that keep war alive only make it more likely, if less suicidal.

It would seem that, at best, humans may be forever damned to a choice between the possibility of complete destruction by total Clausewitzian war with nuclear weapons and the subsequent fallout and nuclear winter, or an updated version of Flexible Response—limited war that would “keep the game (and the human race) alive” but which makes conflict so easy that it becomes all but inevitable.7  The result of this return to limited war seems to be a never-ending, mostly unnecessary state of the “semi-war” that James Forrestal, and more recently Andrew Bacevich, warned of.8

It seems that the latter is already well upon us and will be until it becomes financially unsustainable.  As with total warfare, limited war has also reached new technical heights with drone technology, allowing for campaigns of remote, video game-like strikes of a character arguably intermediate between war and assassination, while the great majority of our people are as oblivious to them as they were to the fact that they were nearly incinerated on a number of occasions during the Cold War and might still be.  In other words, we now have the worst of both worlds: an ongoing state of never-ending limited wars, while the nuclear omnipresence remains and could conceivably be triggered by a limited war, a misunderstanding, an accident, or deteriorating relations with our old Cold War foes.9


Regimes come and go, but The Bomb remains.  The club of nuclear states continues to grow (South Africa being the only nation to have relinquished its nuclear weapons), and now includes nations that dislike and distrust each other perhaps even more than the U.S. and U.S.S.R. did during the Cold War.  If cautious, rational, and realistic leaders like John Kennedy and Nikita Khrushchev came within a wild card of blowing up the world in October 1962, what are the odds of intentional or accidental nuclear launches in an age with more fingers on more buttons, the virtually unlimited potential of computer hacking, and leaders of widely varying degrees of stability?

It is an open question whether an accidental or intentional nuclear war is a greater threat to the world than global climate change and the intimately tied issues of human overpopulation and loss of habitat and biodiversity.  The latter is already unfolding, and potentially catastrophic climatological changes are already literally in the air and locked in place.  How fast and how severely these changes will manifest is the great unknowable.  The possibilities, ranging from a gradual societal collapse due to environmental catastrophe to a sudden nuclear war, give a potential full range of apocalypse from T.S. Eliot’s “bang” to his “whimper,” and Robert Frost’s “fire” to his “ice.”10

Regardless, and as with Vietnam in the 1960s, climate change is actually happening, while nuclear war remains only a possibility contingent on human folly, stupidity, and irrationality.  As the smartest man who ever lived observed, “[t]he unleashed power of the atom has changed everything save our modes of thinking, and we thus drift toward unparalleled catastrophe,” or in more picturesque terms, “I don’t know how World War III will be fought, but World War IV will be fought with sticks and stones.”11

Technology may be lost, at least for a time, but an idea cannot be intentionally destroyed or un-conceived.  A weapon may not be un-invented.  If you live long enough, you will see rival nations and even existential enemies become close allies (a 1970s wisecrack observed that if the U.S. had lost WWII, we would now all be driving German and Japanese cars).  It is clear that The Bomb is a truer and more permanent enemy than any temporal regime.  No conflict is worth destroying the planet over.  Heavy-handed nuclear strategies in a time of declining U.S. economic and military power, an increasing number of nations with nuclear weapons, and the rise of China as Eurasian hegemon will likely make the future even more dangerous than the past.  Another negative effect resulting from the end of the Cold War is a sense of complacency that the threat of nuclear war is over.12  Nothing could be farther from the truth.

It is a singular coincidence that the great physicist Hugh Everett III was a contemporary of Ellsberg’s and was also a nuclear planner (although he did not work for RAND and is not mentioned in the book).  Everett’s “many worlds” interpretation of quantum mechanics suggests the possibility of many parallel universes, each one splitting off as the result of probabilistic events.  If his model is correct, one can only wonder how many parallel tracks include worlds that were destroyed by nuclear war.  To date this one has been lucky, but my experience in life has been that luck does not hold out in human events, not over the long run.  Cue Vera Lynn?

In my opinion, this is a book that Americans should read, including young people when they are able to handle the gravity of the subject.  Ellsberg writes in a strong, unpretentious style, but his book is best read closely and carefully from beginning to end.  It does not skim well.

One should consider reading this book in conjunction with Andrew Bacevich’s history of the Cold War and the rise of the national security deep state, Washington Rules, Stephen F. Cohen’s Soviet Fates, and John Lewis Gaddis’s more conventional history of Cold War strategy, Strategies of Containment. 


  1. George F. Kennan, “Republicans Won the Cold War?,” At a Century’s Ending: Reflections, 1982-1995, New York: W.W. Norton & Company, 1996, 186.
  2. John Lewis Gaddis, George F. Kennan: An American Life, 233-234, 614. See also Bernard Brodie, ed., The Absolute Weapon: Atomic Power and World Order, 1946, as well as his later Strategy in the Missile Age, Princeton University Press, 1959.
  3. See Edward O. Wilson, “Aggression,” On Human Nature, Cambridge: Harvard University Press, 1978, 99-120.
  4. As regards the origins of modern total warfare, see Stig Förster and Jörg Nagler’s On the Road to Total War and David Bell’s The First Total War.  As with the Japanese 80 years later, it has been argued that many Southerners would have willingly continued to fight even after the “hard war” campaigns of Grant, Sheridan, and Sherman that prefigured the total wars of the twentieth century.  See generally Jay Winik, April 1865: The Month That Saved America, New York: HarperCollins, 2001.
  5. Gaddis, George F. Kennan, 234-235.
  6. Gaddis, George F. Kennan, 235.
  7. In a sense, nuclear war—although obviously a form of total warfare—is actually antithetical to Clausewitz.  War is policy “by other means” in Clausewitz’s formulation, but the complete mutual destruction of nuclear war would preclude the achievement of all policy goals.  See John Keegan, A History of Warfare, New York: Alfred A. Knopf, 1993, 381.
  8. Andrew J. Bacevich, Washington Rules, New York: Henry Holt and Company, 2010, 27-28, 57-58.
  9. On the reviving of Cold War tensions with Russia, see Stephen F. Cohen, Soviet Fates and Lost Alternatives: From Stalinism to the New Cold War, New York: Columbia University Press, 2009, 2011.  On the rise of China and the decline of the United States, see Alfred W. McCoy, In the Shadows of the American Century, Chicago, IL: Haymarket Books, 2017.
  10. T.S. Eliot, “The Hollow Men,” V, Collected Poems 1909-1962, 92.  Robert Frost, “Fire and Ice,” The Poems of Robert Frost, 232.
  11. Ralph E. Lapp, “The Einstein Letter That Started It All,” The New York Times Magazine, August 2, 1964, 54.
  12. Such major players of the Cold War as George Kennan and Robert McNamara became supporters of the antinuclear movement during the 1980s.  The end of the Cold War took much of the wind out of the sails of this effort.  See generally George F. Kennan, The Nuclear Delusion, New York: Random House, 1983.  See also Robert S. McNamara, “The Nuclear Risks of the 1960s and Their Lesson for the Twenty-first Century,” In Retrospect, New York: Random House, 1995, 337-346.

New Article

My new article “The Open Hand: Moderate Realism and the Rule of Law” just came out in the Howard Law Journal (Vol. 61 Issue 2).  The hard copy is out, but I am not sure it is available online yet.

The overarching thesis is that if other nations wish to emulate the American legal and judicial systems, the United States should help them, but that we should not aggressively proselytize or foist our system on others.  I also discuss the fact that although rule of law initiatives are seen by some to be idealistic ventures, they are often neoliberal policies used to leverage economic or strategic advantage in the developing world.

Mike Duggan