Tuesday, September 29, 2015


CHURCH OF THE HEAVENLY POOP
Jerry Harkins

            On December 30, 2005, the New York Times ran a piece about teenagers shopping around for religious encounters designed specifically for their age group.  There was a Page 1 photo of a large number of them writhing in various states of ecstasy.  It appears many of them had earlier attended Sunday services elsewhere with their parents and then, instead of repairing to the mall like red-blooded American adolescents, they congregated at the youth services chapel of one of the local supermarket churches.  The New Life teen center [1] in Colorado Springs looks more like a nightclub to me but what do I know?  There’s a picture of a youth minister, one Brent Parsley, leading some sort of liturgy.  According to the church web site, the Reverend Parsley says he is married.  His exact words are, “You betcha! (and she's hot).”  He allows that his third favorite book of all time is Everyone Poops by Taro Gomi [2].  In the photo accompanying the article, he’s dressed like a white urban rapper in disheveled layers with ski goggles worn over his forehead.  He is telling his congregation, “Christmas ain’t about presents, yo!  The true meaning of Christmas is my main man:  J.C.”  Deep dude.  Fucking deep.
To some, Reverend Parsley’s service may seem based on the willing suspension of intelligence in favor of unbridled emotional expressionism.  Whatever its appeal, perfervid worship is not limited to teens.  A similar phenomenon can be witnessed among the Christians called “Holy Rollers,” a small minority of fundamentalists who speak in tongues and worship in primitive frenzy.  In one expression of this, adherents are encouraged to handle poisonous snakes, thereby adding elements of danger and excitement.  It would be easy to say all these liturgies tap into the compulsive, concussive power of sex and that may be part of it.  Pain, danger, sexual excitement, ecstasy:  somewhere in that brew there’s the Eros/Thanatos theme that has been part of the religious experience for millennia.  Ecstasy liberates.  Indeed Bacchus and Eros share the epithet Eleutherios, Liberator.  Still, I’m not sure it applies to those writhing teens in Colorado unless at a layer of the subconscious I have no wish to explore.  They’re just too young.  They have too little experience of life to link sex with death, never mind with religious ecstasy.  Or, given the natural state of their hormones, to have any need to do so.
            The archetypal gyration of ecstasy is a throwing up and shaking of one’s arms, a universal gesture with a rich semiotic subtext.  Most obviously, it conveys the thrill of victory achieved against significant obstacles.  It also expresses confidence and openness.  A politician arriving at an airport or a dais will often throw up his or her arms as if to accept the laurel wreath or the acclamation of a friendly crowd.  Look, I have nothing to hide.  Of course no one over the age of six believes that of any politician.  But the arms-up gesture also signifies an element of emotional surrender.  It may in fact derive from the hands-up stance universally required of prisoners.
            You used to see a slightly less animated version of the New Life service at the World Youth Days run by the late Pope John Paul II.  These typically involved hundreds of thousands of highly engaged young people but there was never any trashing of downtown, no street theater, no endless chants of protest, no binge drinking and, especially, no dirty words directed at the establishment.  The only offense they gave came from the really bad hymns they invariably sang.  Asked why they were spending their summer in the heat and mud, two themes would emerge, neither terribly profound.  The first was simply, “I have to be here” or “I’m called to be here.”  The second was the pure joy of being near the Pope.
            This is not the first time I have found myself out of sympathy with our young people.  Smart as they are, there seems to be much in life that eludes them.  Many have strong opinions about the global economy but absolutely no understanding of it.  They enjoy the most inane entertainment including such dubious jewels in the crown of civilization as Christian Rap.  They tend to dress like slobs and, back home, they often binge like bums.  They don’t read and they don’t write, in many cases because they can’t.  And, yes, an awful lot of them get caught up in the Jesus thing.  They adore—and adore is not too strong a word—a man who has done his utmost to crush the “People of God” theology that emerged from the Second Vatican Council fifty-some years ago.  They worship at the clay feet of an idol who has degraded and demeaned the female half of the human race with his immoral and hypocritical rantings.
            What’s wrong with these kids?  What need did a decrepit old man fill for them?  Why not someone more wholesome?  I do not refer to Brent Parsley.  How about Britney Spears or the Kardashians?  Actually, I think I understand his attraction.  He was pastoral.  He loved these children and what’s more, he respected them.  He had the soul of the poet he once was.  He was courageous in the face of debilitating illness.  He wrote some of the most incisive social commentary of our times.  He faced down the Evil Empire and played a role in moving the world back from the brink of nuclear Armageddon.  Unfortunately, he also preached nonsense and failed miserably the most important test of any cleric, the ability to help people create a satisfying relationship with the divine.  On the contrary he drove many people away from the sacred and he left a legacy of deceit that will be almost impossible to undo.  The Church will crumble and it will be his fault because he was given the last best opportunity to save it and he squandered it tilting at stupidities like priestly celibacy and the use of condoms. [3]
            The kids, I think, know nothing of all this, positive or negative.  I suspect they do know he was crazy but they admired his persistence, his refusal to bend to others, his iron will.  Not a single one of those kids ever read his masterpiece, Centesimus Annus.  Nor have they read any of the sanctimonious drivel he published.  They don’t care, and maybe they shouldn’t.  Adolescence has always been challenging and never more so than at present.  In an Age of Information, today’s young people know far more than they understand.  There is more pressure put upon them from every quarter and, looking forward, they see little relief.  It must seem that the best times are long in the past.  John Paul represented certainty in an uncertain world, loyalty in a treacherous world, hope against all hope.  Jesus said to and of Peter, “Upon this rock, I will build my church and the gates of hell shall not prevail against it.”  For all his frailty, John Paul was a rock.  He insisted, against all evidence, that he was infallible.  In the moral sphere, he could not be wrong.  The young people believed him where they’d be far too smart to believe their political leaders, their gurus or even their parents.
            The differences between New Life and World Youth Day are not insignificant but the similarities are impressive.  The late Pope, for example, probably never savored the literary pleasures of Everyone Poops and it is a good bet that Reverend Parsley has never encountered the prose of Thomas Aquinas.  But like all professional religionists, their stock in trade is the answer to all of life’s problems, big and small.  Such folks know that, in the immortal words of Forrest Gump, “Shit happens.”  And they are delighted it does.  If it didn’t they’d be out of business.  As it is, they have a vested interest in assuring that it continues to happen and do whatever they can to ensure a steady supply.
Notes
1.  The New Life Teen Center is part of the New Life megachurch formerly presided over by Rev. Ted Haggard, a graduate of Oral Roberts University and once regarded as one of the most politically influential evangelicals in America.  He has said that the only difference between President George W. Bush and himself is that he prefers a different brand of pickup truck.  Otherwise he consulted with the White House every Monday.  Rev. Ted was later fired from New Life after admitting to drug use and a liaison with a male prostitute.

2.  An illustrated book from Japan written for children ages “baby to preschool.”  Part of the same series as that classic of children’s literature, The Gas We Pass: The Story of Farts by Shinta Cho.  It may be sacrilegious to wonder what Rev. Parsley’s Number Two all-time favorite book is (assuming, of course, that Number One is the Holy Bible by Daddy-o, the Spook and his main man, the late J.C.  Yo!).


3.  Comparing John Paul II with Pope Francis is irresistible.  Francis too attracts ecstatic crowds but, as a general rule, they appear to be much more diverse and, on average, older.  Like John XXIII, Francis has the chops that would be needed to undo two millennia of the devil’s work as promoted by the church hierarchy.  But like John, he may not have the time.  Moreover, in spite of the theatrics, he has yet to demonstrate that he has the inclination.

Friday, September 11, 2015


MODERN ART
Jerry Harkins



Once upon a time, art was not nearly as convoluted an undertaking as it is today.  Artists did what they did to earn a living or maybe only to nurture their own emotional lives without actually articulating a nebulous theory about it and without the benefit of critics to explain it.  Nor did they need to concern themselves overly with the vicissitudes of a marketplace beyond a handful of patrons.  Aside from works commissioned for churches, the public rarely encountered most of what they produced and art as we understand it was not a force in the lives of most people. Artists were mostly anonymous craftspeople. Of course, there had always been exceptions.  Virgil, for one, was widely acclaimed by both the elite and the ordinary citizens of Rome.  Chaucer himself was not well known to the public at large but The Canterbury Tales was the medieval equivalent of a best seller.  Michelangelo was widely recognized as the greatest artist of his time by contemporary artists, patrons and the Italian public.   Mozart and Handel were favorites of the aristocracy and their large scale works attracted large audiences.  Mozart, however, remained very much a mid-level employee of the Prince-Archbishop of Salzburg and was poor through most of his short adult life. [1]  But Handel died a wealthy man with an extensive art collection.  Thousands of Londoners attended his state funeral and he was buried in Westminster Abbey.

Such exceptions notwithstanding, most artists labored anonymously even as their art often attracted the attention of philosophers, theologians and litterateurs.  Plato and Aristotle both wrote detailed explications of the role of the arts in society and both worried about the power of art to deceive.  In late antiquity, Longinus’ extended essay “On The Sublime” described the Bacchanalian power of art almost admiringly to the point that he is sometimes referred to as Dionysius-Longinus.  The great nominalist debates of thirteenth century Paris dealt extensively with the ethics of representation.  Even Abelard addressed the central issue:  if, as Aristotle wrote, art is the heightened and selected imitation or representation of life, does that mean it automatically violates the standard of truth?  Heightened and selected, after all, implies different from and less accurate than its referent.  The truth is being distorted deliberately to make a point.  The artist who carved the Venus of Willendorf, some 24,000 years ago, re-imagined female anatomy to emphasize its fertility symbolism.  Modern cubism distorts to emphasize the multi-dimensionality of the subject.  Picasso’s portraits of his mistress Dora Maar are thoroughly misleading if all you want to know is what the lady looked like.  The artist depicts his unique perspective of her.  No other viewer can possibly share that perspective, which may be why early cubism was often greeted with disdain.  Modern music, literature and dance were similarly criticized for precisely the same reason:  they distorted, indeed violated reality.  For many, modern art seems meaningless and even tedious.

Of course, all art inherently distorts the world as perceived by our unmediated senses which are in any event often unreliable. Artists bravely stand outside the mouth of Plato’s cave and try to tell us what they see.  Art proceeds from the imagination and deals in metaphor.  Its purpose is not to deceive but to invite the mind to consider something beyond the obvious.  Some find this threatening.  The Bible, for example, condemns “graven images” [2] as sinful enough to be dealt with in the decalogue.  The second commandment begins, “You shall not make for yourself an image in the form of anything in heaven above or on the earth beneath or in the waters below.”

 By the nineteenth century, however, the perception of art was changing.  The arts were going public and artists were suddenly exalted members of the community.  The transition is exemplified by a famous incident in the spa resort of Teplitz in what is now the Czech Republic.  In the summer of 1812, Beethoven and Goethe, friends a generation apart in age, were walking in a public park when they came upon members of the royal family walking in the opposite direction.  Beethoven told Goethe to keep walking as artists are more important than aristocrats and “they must make way for us,” not the other way around.  Goethe thought differently;  he took off his hat, stepped aside and bowed while Beethoven, hands in pockets, went right through the dukes and their retinue. The royals drew aside to make way for him, saluting him in friendly fashion. Waiting for Goethe who had let the dukes pass, Beethoven told him: “I have waited for you because I respect you and I admire your work, but you have shown too much esteem to those people.” [3]

By the end of the nineteenth century, Puccini to the contrary notwithstanding, La Vie de Bohème was no longer the default condition of artists.  Indeed, art itself was undergoing a profound revolution.  In Paris, painters turned their attention to renditions of ideas about things and to emotional responses to those ideas.  Monet’s 1872 “Impression, Sunrise” had suggested the term for an entire movement.  In an earlier era, this painting might have been a recognizable picture of the harbor at Le Havre.  Now it appeared to be a quick sketch done at the scene and meant to be taken back to the studio for refinement. [4] At first, critics used the term "impressionism" pejoratively.

Impressionism became the progenitor of a dazzling variety of isms that arose in its aftermath as painters and sculptors sought to depict interior rather than exterior realities.  Most of their works remained grounded in representation even as they were explicitly trying to break away entirely from the object in favor of complete abstraction.  It was a difficult quest.  Ultimately, Jackson Pollock developed a thoroughly abstract genre called “drip painting,” and others such as Mark Rothko and Clyfford Still invented color field painting.

Abstract expressionism, derided by many at first, ultimately became widely if not universally accepted by connoisseurs, collectors and the general public.  In 2012, a color field painting by Mark Rothko, “Orange, Red, Yellow,” brought $87 million at auction, 2.6 times the highest price ever paid for a Rembrandt.  This was only the thirtieth highest price on record for a painting but, among the top fifty, all but two or three are modern or contemporary.  Acceptance was not, of course, universal.  In 1975, Tom Wolfe published a cri de coeur denouncing and ridiculing modern art, especially abstract expressionism. [5]  His main complaint seemed to be that the retreat from representation, the “de-objectification” of art, amounted to nothing but pseudo-intellectual masturbation.  It was a theme he returned to twenty-five years later in a remarkable defense of the sculptor Frederick Hart who had been excoriated by the art establishment for “defacing” the abstract purity of Maya Lin’s Vietnam Memorial by providing a realistic portrayal of “Three Soldiers” nearby.

The entire brouhaha was an exercise in absurdity.  Ms. Lin’s wall may be the most moving war memorial ever erected.  It does indeed “de-objectify” the war but, in place of jungles and weapons and burning villages, it forces the viewer to focus on the price we paid in terms of individual lives cut short.  Mr. Hart’s soldiers are majestic as sculpture and innovative as memorial art.  Moreover, they are casting their gaze at the wall with a combination of reverence and grief.  They, together with Glenna Goodacre’s Vietnam Women’s Memorial, are a subsidiary but integral part of the experience.  They honor the wall and encourage us to see it as they do.

Fifty years after Marshall McLuhan explained the difference between “hot” and “cool” media, any debate over the relative merits of representational and abstract art is bootless.  The former is hot, engaging the viewer quickly and completely without requiring extensive interaction.  The latter is cool, demanding a great deal of involvement.  Both communicate between artist and audience.  Neither does so perfectly.  Both seek to seduce your imagination.  Both celebrate the uncertainty and ambiguity that are part and parcel of the human condition.

What is new in “modern” art is not abstraction per se, of which there are examples going back thousands of years [6].  But over the past century and a half, artists have developed an entire spectrum of genres which rely more or less on non-representational imagery.  Very quickly, such art has become dominant, especially in North America and Europe.  Many forces encouraged this transition:  the declining influence of the aristocracy and the church, urbanization and a more educated populace, a sudden rise in academic interest in the arts and, not the least, a sense that the humanities in general had reached something of a natural climax.  Museums were established, attracting a large audience of people yearning for culture and advancement.  The British Museum had already opened to the public in 1759 and the Louvre followed in 1793.  Nicholas I admitted the public to The Hermitage in 1852 and the Metropolitan Museum of Art opened in 1872.  With broader interest came both artistic and economic independence and a favorable environment for innovation.

The new forms occasioned a wide range of responses, positive and negative, from a startling array of sources.  Suddenly everybody was a critic.  Modernism was greeted with ridicule by the public press, condemned as “degenerate” by politicians of the left and right and denounced as the “new iconoclasm” by at least one philosopher.  Four years after leaving the White House, Teddy Roosevelt published a review of the seminal 1913 Armory Show.  He was not impressed by the European “extremists” and recommended that Americans “should keep track of the avant-gardes but by no means approve of them.”  Both he and The New York Times were actually offended by Marcel Duchamp’s “Nude Descending A Staircase, No. 2.”  More recently, the poet John Ciardi summarized what many traditionalists still think of modernism.  “Modern art,” he wrote, “is what happens when painters stop looking at girls and persuade themselves that they have a better idea.”  The negative reactions, however, were overpowered with startling rapidity and, as we have seen, traditionalism was relegated to the defensive.  A similar sea change in perceptions accompanied the arrival of modern poetry and fiction while almost the opposite occurred in the reception of modern music.  But such reactions are a matter of taste which cannot be the subject of edict or fiat.  Thus, critics who proceed from a theoretical bias expose themselves to irrelevance.  What happens in art—any art—is a kind of intercourse between an artist and a witness.  As Rainer Maria Rilke explained to the young poet, “Works of art are of an infinite solitude, and no means of approach is so useless as criticism. Only love can touch and hold them and be fair to them.”

Notes

1.  Mozart earned 150 Florins a year in Salzburg.  The Florin contained 3.5 grams of pure gold (.123 oz.) which, at today’s price ($1,564.20/oz.) would be equivalent to $28,859.49 a year, roughly the median income of all U.S. workers in 2012.  Such calculations are, of course, less reliable and less meaningful than a writer would hope.

2.  Exactly what constitutes a “graven” image is a matter of some ambiguity.  Most English translations of the second commandment (Deuteronomy 5:8 and Exodus 20:4) specifically limit the prohibition to graven or carved images and most commentators believe that the Hebrew original refers only to idols.  But the text clearly refers to all images.  In both books, it reads, “You shall not make for yourself an image in the form of anything in heaven above or on the earth beneath or in the waters below.”  The next verse prohibits the worship of such images but it is clearly referring to one type of the generally prohibited graven images.  Leviticus 26:1 specifically addresses the question of idols.  “Do not make idols or set up an image or a sacred stone for yourselves, and do not place a carved stone in your land to bow down before it.”  This is one of many priestly regulations, not part of the Decalogue.  To complicate matters further, in Numbers 21:8-9, God commands Moses  ‘“Make a snake and put it up on a pole; anyone who is bitten can look at it and live.’  So Moses made a bronze snake and put it up on a pole. Then when anyone was bitten by a snake and looked at the bronze snake, he lived.”  A bronze snake is clearly a graven image.  Of course, this is one of those things you should not try at home.

3.  This description of the famous incident is taken from a written account by Bettina von Arnim, a friend of both Beethoven and Goethe.  There are other accounts offering slightly different details but there is no testimony that I know of by an eyewitness.  For one thing, Beethoven’s mild reproach does not sound like the great man who had a titanic temper.  The famous 1887 painting of the scene is by Carl Rohling who was born 37 years after the event.

4.  It almost seems the Impressionists were borrowing an idea from the Romantic poets.  In 1798, William Wordsworth had written, “…poetry is the spontaneous overflow of powerful feelings:  it takes its origin from emotion recollected in tranquility.”

5.  The book (or extended essay), titled The Painted Word, is a notable example of the New Journalism of which Wolfe was a pioneer.  There is irony in the fact that the journalist who dethroned objectivity from journalism should complain about pretty much the same thing in art.

6.  The pre-historic people of Ireland were among the earliest abstractionists, etching their elaborate spirals and other designs on rocks and in metal.  Some of these appear to be standardized patterns which recur in widely separated locations.  Such constructions as the passage tomb at Newgrange, which was built between 3200 and 3100 BCE, display elaborate designs some of which seem to be symbolic while others appear to be pure design.  They are much more recent than the cave paintings of Lascaux and other sites in Europe which are astonishingly representational.  Greek and Roman architectural ornamentation used abstract motifs but are, of course, much younger than the Irish material.

Tuesday, August 11, 2015


THINKING ABOUT PLURAL MARRIAGE

Jerry Harkins



Following the Supreme Court’s recognition that gay people have the same right to marry as straights, a number of commentators have remarked that the same logic might apply to those wanting multiple wives (or, more rarely, husbands).  Indeed it may.  The heart of the Court’s opinion as expressed by Justice Kennedy is in its last three sentences which speak of the petitioners’ goals.  “Their hope is not to be condemned to live in loneliness, excluded from one of civilization’s oldest institutions. They ask for equal dignity in the eyes of the law. The Constitution grants them that right.”  In dissent, Chief Justice Roberts wrote, “It is striking how much of the majority’s reasoning would apply with equal force to the claim of a fundamental right to plural marriage.”  He concedes that there may be other factors that would militate against polygamy [1] but they are not to be found in the majority opinion.  Of course, there is a very good reason for their absence;  the case at hand had nothing to do with polygamy.  In addition to this red herring, the Chief Justice was also begging the question.  He relies on the unstated assumption that plural marriage will be considered an unthinkable degeneracy.  No acceptable logic could be raised in its defense and, therefore, it is unnecessary for him to explain why it is degenerate.  This assumption is deeply ingrained in American society but it is by no means universal, which is a good thing because it is also not true.

Plural marriage is an accepted practice in more than fifty countries, mainly those with Muslim majorities.  It is much rarer in the non-Muslim world although there have been and are pockets of it in unexpected places.  In Ireland, for example, until the seventeenth century, Brehon law permitted multiple marriage although with strong protections for the first wife.  Even today, in many societies high status men are allowed, and even encouraged, to take multiple mistresses or concubines.  Almost all British monarchs have done so.  Edward VII who reigned from 1901 to 1910 was a notorious philanderer although usually he was slightly more discreet about it than his predecessors.  His great-great-grandson, Prince Charles, the incumbent Prince of Wales, is less prolific but also less discreet.  King Louis XIV of France, the Sun King, was a devout Catholic who nonetheless had nineteen mistresses.  His successor and great grandson, Louis XV (Louis the Beloved) had fewer but among them were the Mesdames Pompadour and Du Barry and the lovely Irish redhead Marie-Louise O’Murphy who posed fetchingly naked for François Boucher.  The reason usually given for this playfulness among a class of men not noted for their humor was that official marriages were political or economic unions, not love matches.  Perhaps also there is a bit of the Sun King’s conviction that L'État, c'est moi.

In America’s live-and-let-live society, we seem to be making our way toward a more broadly permissive consensus.  Most of us probably believe that the state has no business unduly burdening freedom of association.  The problem seems to be that our experience with polygyny is rife with forced marriages often involving poorly educated young women.  Prosecutions of polygamists however are typically based on other crimes such as statutory rape and incest. [2]  For example, Warren Jeffs, Prophet of the Fundamentalist Church of Jesus Christ of Latter-Day Saints, is said to have had 78 wives, 24 of whom were 16 or younger.  He is currently serving a life sentence in Texas for two counts of sexual assault involving two children, one a nephew the other a niece.  It is not clear from press reports whether he thought of the victims as spouses (or for that matter as victims).

As soon as the Supreme Court said in Lawrence v. Texas (2003) that anti-sodomy laws violate the constitutional right to privacy, a suit was brought in Utah arguing that the same logic should apply to polygamy. U.S. District Judge Ted Stewart rejected the argument that the state's ban on polygamy violates constitutional rights of religion and privacy, saying the state has an interest in protecting monogamous marriage. "Contrary to plaintiffs' assertion, the laws in question here do not preclude their private sexual conduct," Stewart said. "They do preclude the state of Utah from recognizing the marriage ... as a valid marriage under the laws of the state of Utah."  This is a weak argument in that it does not address the claim to religious protection and does not specify the state’s interest in protecting monogamy.  Of course Judge Stewart did not have to address these issues because the plaintiffs based their case on the absurd proposition that Utah was outlawing private sexual behavior when it was really refusing to recognize polygamous relationships as lawful marriage.  Still, in light of Obergefell v. Hodges, society must now confront those issues head-on.

The First Amendment reads in its relevant part, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.”  The Fourteenth Amendment in its relevant part extends this ban to the states saying, “No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.”  Together, these prohibitions create a strong protection for religion in all its aspects.  But these are not absolute.  Our history is replete with legal challenges to a wide variety of state and federal actions said to violate the Constitution and case law includes many decisions involving logic that is meticulous.  As I have written elsewhere:

Do the children of Jehovah’s Witnesses have to salute the flag during public school morning exercises?  (No.)  May Quakers refuse alternative service in lieu of the military draft? (No.)  Do neo-Nazis have the right to parade through the streets of Skokie, Illinois, a largely Jewish suburb of Chicago with many Holocaust survivors? (Yes.)  May a state ban the showing of a film deemed sacrilegious by the National Legion of Decency?  (No.)  More recently, may a private company refuse to provide its employees with federally mandated benefits that offend the owner’s religious beliefs?  (Yes.)  Should American currency refer to God?  Should religious institutions be tax exempt?  May states erect monuments to the ten commandments?  May towns allow crèches in public spaces?  Can Arizona require a loyalty oath invoking “So help me God” as a condition of receiving a high school diploma?  May public high school students pray in public before a football game?  Did the United States and Canada have the right to suppress the Native American ceremony of the potlatch festival or the sun dance because both were considered pagan and uncivilized?  What is the difference between allowing members of the Native American Church to use peyote for sacramental purposes but prohibiting members of The Religion of Jesus to use marijuana?

So, does the First Amendment protect fundamentalist Mormons who wish to practice polygamy among consenting adults?  If not, what public interest supersedes the polygamists’ freedom of religion?  If so, would governments be required to recognize such marriages for purposes such as taxation, inheritance, liability and the like?

The judicial analysis of polygamy in the United States begins and virtually ends with Reynolds v. United States, 98 U.S. 145 (1878).  George Reynolds, a Mormon and a bigamist, was voluntarily put forth as a sacrificial lamb by the Church to test a federal law against polygamy which it believed violated its religious freedom.  Speaking for a unanimous court, Chief Justice Waite disagreed:

…we think it may safely be said there never has been a time in any State of the Union when polygamy has not been an offence against society, cognizable by the civil courts and punishable with more or less severity. In the face of all this evidence, it is impossible to believe that the constitutional guaranty of religious freedom was intended to prohibit legislation in respect to this most important feature of social life. Marriage, while from its very nature a sacred obligation, is nevertheless, in most civilized nations, a civil contract, and usually regulated by law. Upon it society may be said to be built, and out of its fruits spring social relations and social obligations and duties, with which government is necessarily required to deal.

This is another weak argument.  First, the court should have learned from the Dred Scott decision (Dred Scott v. Sandford, 60 U.S. 393, 1857) that the mere fact that a legal principle had always been in effect is not decisive as to the question of whether it should continue to be enforced.  Until early in the nineteenth century slavery itself had always been considered perfectly appropriate in most parts of the world.  Second, the question of polygamy is important enough to warrant more than the mere assertion that society is built on a single form of marriage.  Given that polygamy is openly encouraged in the Old Testament and in modern Muslim societies without apparent ill effect, any assertion to the contrary cries out for specifics.

It is hard to see how a law prohibiting polygamy among consenting adults could survive a constitutional challenge today.  Framed in a First Amendment context, today’s Supreme Court might reject such a law unanimously.  The liberals would weigh the harm done to religion against the vague assertion of historical precedent.  The conservatives might see in such a law an unwarranted intrusion of government into private behavior.  Framed any other way, the outcome is less predictable but could flow from the same logic.  Any actual harm to consenting adults might be seen as possible but not severe.  Should a church be allowed to solemnize polygamous unions and the state refuse to do so, the resulting situation would be not unlike that of thousands of relationships where men keep one or more mistresses. [3] But as long as all parties take good care of all children, pay all taxes and demand no special privileges, the question would come down to a balancing of incommensurables and the outcome is not obvious.

What is obvious, though, is that plural marriage will never be a dominant issue in America the way gay marriage has been.  Americans place a high value on personal fulfillment and it is hard to imagine many of us opting for a life of communal intimacy.  Even among the early Mormons, plural marriage was a minority choice in spite of the fact that the leadership promoted it as the way of perfection. [4] When the Church felt it was time to change, it managed the transition in a careful, wise and compassionate manner that achieved the goal quickly with minimum disruption to the way of life of most of those affected.  There is, however, a tiny remnant of true believers and, perhaps, another small community of people who embrace polygamy for less than religious reasons.  Small as these groups are, they merit consideration as we navigate through the questions raised by changing mores and attitudes.

Notes

1.  As used in this essay, polygamy refers to the practice of having more than one spouse, polygyny refers to situations where one man has multiple wives, and polyandry refers to one woman having multiple husbands.  It is assumed that a marriage (or any other sexual relationship) that is less than fully consensual cannot enjoy the protection of civil law.  Finally, it should be noted that the ideology and practice of free love is an historical reality.  Unlike polygamists, its adherents have usually rejected marriage entirely but, in communities like Oneida, founded in 1848 by John Humphrey Noyes, there are significant parallels.

2.  I am aware that many of the same arguments advanced here could apply to incestuous marriages as easily as they do to polygamous ones.  Indeed, some laws against incest (e.g., against marriage among close in-laws) are based on myth rather than biology.  The problem is that most incest today involves children, usually girls, and close male relatives and is considered a particularly heinous form of pedophilia even if it appears to be consensual.  From a moral point of view, any sexual contact lacking the informed consent of both parties is a species of rape and the state is entitled to set an age below which informed consent is not possible.

3. I realize this is a fanciful role reversal in which the state would be taking what is essentially a moral position and the church the utilitarian one.

4.  Practical demographics required this.  If every Mormon man had two wives, the community would need twice as many women as men.  If every Mormon man had 78 wives…well, you can see the problem.
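
The arithmetic behind this note can be made explicit.  A toy calculation, assuming a population with equal numbers of men and women:

```python
# Toy sex-ratio arithmetic for universal polygyny.
# Assumes a population with roughly equal numbers of men and women.

def max_marriage_rate(men, women, wives_per_man):
    """Fraction of men who can each have the stated number of wives."""
    possible_husbands = women // wives_per_man  # each husband needs this many wives
    return min(possible_husbands, men) / men

# With a balanced population of 1,000 men and 1,000 women:
print(max_marriage_rate(1000, 1000, 1))   # 1.0   -> every man can have one wife
print(max_marriage_rate(1000, 1000, 2))   # 0.5   -> only half can have two
print(max_marriage_rate(1000, 1000, 78))  # 0.012 -> about one man in eighty
```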



Sunday, July 19, 2015


TAKING STOCK:  THE MADNESS OF MARKETS

Jerry Harkins

“The current crisis has demonstrated that neither bank regulators, nor anyone else, can consistently and accurately forecast whether, for example, subprime mortgages will turn toxic.”
                     —Alan Greenspan, March 10, 2010


Poppycock!  Total, utter nonsense!  The truth is that it is in the nature of bubbles to burst.  The truth is that you can’t make silk purses out of pigs’ ears.  The truth is that anyone who tries to slice and dice subprime mortgages—or subprime anything else—and thereby turn them into investment grade securities needs his or her head examined.  The run-up to the crash of 2008 was an obvious bubble and the subsequent meltdown was entirely predictable.  Its ramifications were near catastrophic and are still being felt around the world.

The markets for stocks, bonds, commodities and other asset classes play several crucial roles in all capitalist societies.  Economically, they provide the capital companies and governments need to promote innovation and efficiency.  Socially, they provide the basis for virtually all savings, especially those meant to support major family expenditures such as housing, education and retirement.  They are also the principal means of encouraging sustainability in resource utilization and other social goals.  By their very nature, they act as insurance that the long term interests of both the economy and the society are taken into consideration in the decision-making process.  Or they should do all these things.  But the crash of 2008 was compelling evidence that these functions were being jettisoned in favor of the extremely dangerous idea that markets are nothing more than ultra short term, high stakes casinos.

There had been early warning signs.  Amidst the euphoria characteristic of the markets around the turn of the century, there were some naysayers, among them the executives of Procter and Gamble who saw portents of stormy weather.  In early 2000, P and G had been looking forward to profit growth of 13% above 1999.  That would have been spectacular performance but on the morning of Tuesday, March 7, 2000 the company released a statement saying that earnings for fiscal year 2000 would likely be only 7% higher than 1999.  By any rational standard this would still be outstanding performance.  But, the minute the stock market opened, P and G took a 30-point or 33% hit.  Within three minutes, $36 billion of its market valuation simply evaporated.  Poof!  By week’s end, the stock shed an additional $4 billion.  It wasn’t a complete surprise.  The market had already taken the company down by 25% from its 1999 high.  Even at 13%, it was no longer a sexy “new economy” company. 
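
The scale of that morning can be reconstructed from the figures quoted above.  A back-of-envelope sketch; the share count below is implied by those figures, not taken from P and G’s filings:

```python
# Back-of-envelope reconstruction of the March 7, 2000 P&G drop,
# using only the figures quoted in the text above.

point_drop = 30      # dollars per share lost at the open
pct_drop = 0.33      # the same drop expressed as a fraction of the share price
cap_lost = 36e9      # dollars of market value erased

implied_price = point_drop / pct_drop   # pre-drop share price implied by the figures
implied_shares = cap_lost / point_drop  # shares outstanding implied by the figures

print(round(implied_price, 2))   # ~90.91 dollars per share before the open
print(implied_shares / 1e9)      # 1.2 (billion shares outstanding)
```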

Procter and Gamble was and is among the bluest of the blue chips.  Founded in 1837, it has always been one of the best managed entities in the world, right up there with IBM and GE.  It is as recession proof as any company can be, home to a roster of global consumer brands including Ivory Soap (1880), Crisco (1911), Tide detergent (1946), Crest toothpaste (1955), Pampers diapers (1961), Folgers coffee (acquired 1963), Bounty paper towels (1965), Clairol hair coloring (acquired 2001) and dozens of others.  By contemporary standards, it is big but not huge.  In 2013, its sales amounted to $84 billion and its net earnings were about $11 billion.  Its market valuation was $220 billion, only $10 billion less than Facebook’s.  In 2014, it ranked 31st on the Fortune list of the 500 largest U.S. corporations.

In a rational world (or in an efficient market), it simply could not have lost 33% of its value in a matter of minutes.  But what happened to it was not uncommon in 2000 as one company after another performed well but not well enough to satisfy the voracious expectations of growth that Wall Street had come to believe were its birthright.  Stock prices were now being governed by these expectations rather than by conventional measures of value.  James J. Cramer, a pundit for TheStreet.com, taunted traditional value investors by writing, “All of that price-to-earnings flotsam and discount-to-normalized-earnings jetsam didn’t save you today.”  Still, the idea that investors would bail out of P and G because it might grow by only 7% was a sign that something was very wrong in the stock market, something that has gotten progressively worse in the succeeding fifteen years and that threatens both the economic and the social functions of markets.

There had been other warnings.  On October 19, 1987, “Black Monday,” the Dow Jones Industrial Average [1] lost 22.6% of its value on volume that was twice the previous record.  It was the worst day in the history of the market and no one knew why.  It subsequently inspired a vast speculative literature but to this day no one really knows why it happened.  Then and now, much of the discussion centered on “program trading,” which was initiated by computerized systems responding to various “trigger” events.  But there had been no obvious triggers.  There were concerns about rising interest rates and an unexpected increase in the trade deficit but there is nothing unusual about such concerns or about Wall Street being surprised by them.  Wall Street is easily surprised.  It was also in some ways a slow motion crash.  On the previous Wednesday, the Dow Jones had lost a then-record 95 points and two days later it shed another 108 points.  On Monday, intense selling pressure overwhelmed the trading systems, which were reporting over an hour late, thereby adding to the anxiety.  But it was not Armageddon.  The market began to recover on Tuesday, suggesting that traders knew it had been oversold.  Two months later, it ended the year up slightly over 1986 and began an historic rise that lasted twenty years with only minor interruptions.  On a long term chart, the 1987 crash appears as a mere blip.  Still, it was not your grandfather’s classic panic and the psychology of the market has never been the same.

Extreme crashes are blessedly infrequent but since 1987 global markets have witnessed a spectacular increase in average volatility. [3] For example, in the first quarter of 2015, the DJIA posted an increase of 0.008%, a virtually flat performance.  But on March 30, it had a gain of 263 points, more than 1%.  The following day, the last session of the quarter, yielded a loss of 200 points.  This was not unusual.  Since 1987, the indices have often gone down 1% or more one day and up 1% or more the next day, and such volatility can continue for weeks on end.  Why?  It has absolutely nothing to do with the underlying value of the economy or even with its prospects.  Most recently, oil futures prices and interest rate fears have taken much of the blame.  When oil is down, though, the market may rise or fall.  When the Federal Reserve is thought to be signaling the end of low interest rates, it is taken as cataclysmic news.  We seem to be living through a period when tea leaves set off earthquakes.  The market has become a gambling den for professional players who constantly strive to game the system by shedding risk and passing it off to others.  They are true believers in the adage that there’s a sucker born every minute.
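
The point that a flat quarter can conceal violent daily swings is easy to verify numerically.  A sketch using invented data, sixty trading days alternating between gains and losses of 1%:

```python
# A nearly flat index that is nonetheless violently volatile.
# Hypothetical data: 60 trading days alternating +1% and -1%.

returns = [0.01, -0.01] * 30

level = 100.0
for r in returns:
    level *= 1 + r

net_change = level / 100.0 - 1          # nearly zero over the whole quarter

mean = sum(returns) / len(returns)
variance = sum((r - mean) ** 2 for r in returns) / len(returns)
daily_vol = variance ** 0.5             # yet every single day moved a full 1%

print(round(net_change * 100, 2))   # ~-0.3 percent for the quarter
print(round(daily_vol * 100, 2))    # 1.0 percent per day
```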

Again this is nothing new.  As St. Paul points out (1 Timothy 6:10), “…the love of money is a root of all kinds of evil.”  He might have added that the love of money can readily transform itself into the love of more money, which we call greed.  And greed seems to be an inherent concomitant of free markets and a constant challenge to regulators.  It is essential that all market participants have incentives but it is human nature that incentives tend to become addictive and to require more and more to satisfy the hunger.  Like all addictions, the process is insidious, corrupting some players in ways they may not even notice.  Some very smart people wind up doing stupid things in the belief that they won’t be caught.

Shortly before he went to jail for insider trading, Ivan Boesky famously told students at the University of California, “I think greed is healthy. You can be greedy and still feel good about yourself.”  This line became famous when paraphrased by Gordon Gekko in “Wall Street” (1987) and by Larry the Liquidator in “Other People’s Money” (1991).  But it is not true.  Once natural self-interest metastasizes into avarice it becomes both sociopathic and self-destructive.  The trick is to recognize and excise the cancer before it reaches that point.  This is easier said than done, as is evident from the headline scandals of the last generation.  Among them were Archer Daniels Midland (price fixing, 1995), Enron (accounting fraud, 2001), Tyco (racketeering, 2002), WorldCom (accounting fraud, 2002), HealthSouth (bribery, 2003) and, of course, Madoff Securities (Ponzi scheme, 2008).

What is captivating about this list is the diversity of the alleged crimes, from simple theft to the most exotic financial skullduggery.  The Tyco executives were accused of stealing $600 million from the company by giving themselves unauthorized loans and issuing stock fraudulently.  It was little more than a mugging, with the money going to high living including a $2 million birthday party for the Chairman's wife and a $6,000 shower curtain for his bathroom.  It actually hurt almost no one, including the shareholders, who have seen their stock rise nicely.  The Enron case was quite the opposite:  a grab bag of elaborate, almost incomprehensible financial schemes meant to present to the world the picture of a huge, successful energy company.  The outside world knew it as Number 6 on Fortune Magazine’s list of the largest 500 global corporations.  In fact, it was nothing of the sort.  It had already been sucked dry by gross mismanagement and Wonderland accounting designed to conceal the losses and bleed California electric ratepayers.  The key players were paying themselves enormous salaries and bonuses but, as at Tyco, those had little long term effect on the company.  Mostly, it seemed, these geniuses were covering up for their own operating mistakes.  In any event, the important point is that very few people could ever claim to understand their financial machinations.  In this regard, Bernie Madoff was a throwback to a simpler era.  His scheme was so simple that he was able to fool regulators and clients for twenty years or more, depending on whom you believe.  Ultimately the law caught up with him.  His penalty did nothing to deter the next dozen or so Ponzi schemes.

As bad as these scandals were, they did little to affect the market as a whole.  Enron, for example, hurt a lot of people, mostly shareholders, to the tune of $74 billion (including employees’ pension savings) and creditors totaling about $67 billion.  However, substantial recoveries were realized and the total actual loss (not counting overpayments by Californians for their utilities) was probably just under $11 billion.  The scandal also contributed to the market downturn of 2000-2003 but its impact was dwarfed by the events of 9/11.  The Madoff scandal may have exacerbated the 2008 meltdown but, again, it was overshadowed by the subprime mortgage fiasco, which represents a whole new level of financial deviltry.

Traditionally, residential mortgages were safe investments.  For one thing, people who were not creditworthy generally could not get them.  For another, borrowers tend to prioritize their mortgage payments each month.  Beginning in the 1980’s, however, banks began to market adjustable rate loans to borrowers with less than excellent credit.  This virtually guaranteed that the default rate would increase.  About the same time, Michael Milken was discovering that high yield or “junk” bonds issued by financially stressed corporations could be packaged together.  Each such package would have a known risk of default which would be fully reflected in its high yield.  In essence, this meant that the packages could be marketed as though they were high quality investment grade securities.  Milken’s theory might work in an environment of stable markets but it was soon being applied to packages of subprime mortgages and other instruments that were unstable by definition.
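
Why the theory required stable markets can be seen in a small Monte Carlo sketch, with all default rates invented for illustration: a pool of loans that default independently has a tightly predictable loss rate, but a common shock that makes loans default together destroys that predictability.

```python
# Pooling loans: predictable when defaults are independent,
# unpredictable when a common shock correlates them.
# All rates here are invented for illustration.
import random

random.seed(1)
LOANS, TRIALS, P_DEFAULT = 1000, 2000, 0.10

def pool_loss(shock_prob):
    """Fraction of a 1,000-loan pool that defaults in one scenario.
    With probability shock_prob, a downturn makes every loan in the
    pool default at triple the normal rate."""
    p = P_DEFAULT * 3 if random.random() < shock_prob else P_DEFAULT
    return sum(random.random() < p for _ in range(LOANS)) / LOANS

def spread(losses):
    return max(losses) - min(losses)

independent = [pool_loss(0.0) for _ in range(TRIALS)]
correlated = [pool_loss(0.3) for _ in range(TRIALS)]

print(spread(independent))  # narrow range: the pool's risk looks "known"
print(spread(correlated))   # wide range: a common shock swamps diversification
```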

“Securitization” itself can be a useful financial tool and it is important to separate the theory from the insider trading and other abuses that came to be associated with its practice.  Unfortunately, the “rocket scientists” of Wall Street could not leave well enough alone.  They began to create more and more exotic investments by separating out every conceivable aspect of the underlying loans and focusing exclusively on their short term performance prospects.  Given a basic package consisting of, say, 100,000 subprime mortgages, an investor, usually a bank or an insurance company, could buy an instrument that exposed it to only the principal repayment stream or only the interest stream or to some combination of both based on the average yield and/or the average maturity.  The possibilities got even more abstract when investors were able to buy and sell puts and calls on options or futures contracts or engage in custom tailored “swaps” of any combination thereof.  In the monkey-see-monkey-do world of high finance, it soon became possible to make exotic bets on interest rates, weather conditions and space on oil tankers.  And this was only the beginning of a process that has rendered investments increasingly abstract and increasingly divorced from the real world and its long term interests.  In many cases, Wall Street sold instruments that had no palpable connection to any underlying value.  In the most notorious cases the bankers who engineered these instruments did so specifically in order to bet against them and thus against the customers who bought them.  It was roulette.  For the customers, Russian roulette.
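
The principal-stream/interest-stream split described above is purely mechanical.  A minimal sketch, with one hypothetical 30-year loan (invented amount and rate) standing in for the whole pool:

```python
# Splitting a mortgage's cash flow into an interest-only (IO) stream and
# a principal-only (PO) stream, as a securitizer would.
# One hypothetical loan stands in for the 100,000-loan pool.

balance = 200_000.0        # hypothetical loan amount
monthly_rate = 0.06 / 12   # hypothetical 6% annual rate
months = 360               # 30-year term

# Standard fixed-payment amortization formula.
payment = balance * monthly_rate / (1 - (1 + monthly_rate) ** -months)

io_stream, po_stream = [], []
for _ in range(months):
    interest = balance * monthly_rate
    principal = payment - interest
    io_stream.append(interest)    # sold to the interest-only buyer
    po_stream.append(principal)   # sold to the principal-only buyer
    balance -= principal

print(round(payment, 2))          # ~1199.10 per month
print(round(sum(po_stream), 2))   # 200000.0 -- the PO buyer collects the principal
print(round(sum(io_stream), 2))   # ~231676  -- the IO buyer collects the interest
```

If borrowers prepay or default, the two buyers are hurt very differently, which is exactly where the "known risk" of the pool stops being known.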

We have already noted that the increasing volatility of the stock market is not caused by anything in the real world.  A good example of Wonderland markets is the price of oil.  By now we are accustomed to sudden spikes in the price of gas at the pump and we have learned to pay attention to the price of crude oil.  Supply and demand play a small role in this phenomenon but the real culprit is the little understood futures market.  A futures contract is an agreement to buy or sell some commodity—in this case 1000 barrels of crude oil—at a given price on a given date in the future, often 90 days from the date the contract was originally made.  These contracts can be traded at any time during their life and are usually traded many times before they expire.  A few players actually want to buy or sell the oil but the vast majority have no interest in or ability to cope with the real stuff.  They are gamblers, pure and simple, who will eventually settle their positions in cash.  They will be the first to claim they provide important liquidity to the market which is true except that the market needs only a tiny fraction of that liquidity.  Meanwhile, their speculation overwhelms the market for real oil and sets the price for vital commodities like gasoline at the pump and home heating oil.  It is as though the value of the dollar were determined by the Las Vegas blackjack tables.  The same insanity applies to the markets for wheat, soybeans, bacon and, if you remember the movie “Trading Places,” frozen orange juice concentrate.
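
The arithmetic of a cash-settled futures position is simple; the 1,000-barrel contract size is from the paragraph above, while the prices are invented:

```python
# Cash-settling a crude-oil futures position.
# One contract covers 1,000 barrels; the prices here are invented.

BARRELS_PER_CONTRACT = 1000

def settle(contracts, entry_price, exit_price):
    """Dollar profit (or loss) when a position is closed in cash.
    Positive `contracts` means long (bought first); negative means short."""
    return contracts * BARRELS_PER_CONTRACT * (exit_price - entry_price)

# A gambler goes long 10 contracts at $60/barrel and exits at $63.
# No barrel of oil ever changes hands.
print(settle(10, 60.0, 63.0))    # 30000.0

# A short seller of 5 contracts loses when the price rises.
print(settle(-5, 60.0, 63.0))    # -15000.0
```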

Futures contracts were invented 5,000 years ago in Mesopotamia and have been essential enablers of agriculture and large scale extractive industries ever since.  They provide a level of financial stability to actual producers and users of a wide range of commodities.  Futures contracts on interest rates and other financial benchmarks are important risk management tools for banks.  But as a form of gambling, they become more complex over time.  Complexity increases the probability of what mathematicians call “cascading failures” of their component parts, which can lead to catastrophic failures.  As this is written, the major disaster scenarios are related to “algorithmic trading,” most notably to the form known as high frequency trading (HFT), a strategy that redefines the notion of short term from minutes to microseconds.
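
How complexity compounds risk can be put numerically.  Even under the crude simplifying assumption that a system's n components fail independently, each with some small probability p, the chance that at least one fails grows quickly; the dependencies that produce an actual cascade only make matters worse.  A sketch with illustrative numbers:

```python
# How complexity compounds small risks, under the simplifying
# assumption of independent components:
#   P(at least one of n components fails) = 1 - (1 - p)^n
# The component counts and failure rate are illustrative only.

def any_failure(p, n):
    return 1 - (1 - p) ** n

p = 0.001  # a one-in-a-thousand chance that any single component fails
for n in (10, 100, 1000, 5000):
    # With 1,000 components the odds of some failure are already ~63%;
    # with 5,000 they exceed 99%.
    print(n, round(any_failure(p, n), 3))
```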

In today’s market, 2% of the traders employ HFT and related strategies which allow them to buy and sell huge positions instantaneously.  The most popular of these strategies account for between 50% and 75% of daily trading, all of which is transacted by computers programmed with highly sophisticated algorithms, statistical models that track a wide variety of trigger events.  The SEC has said that such a system caused the “Flash Crash” of May 6, 2010.  It began with the sale of 75,000 “E-Mini” futures contracts worth $4.1 billion at 2:32 PM.  (E-Minis are bets on the Standard and Poor 500 Index.  Each contract has a nominal value of 50 times the value of the Index at any given moment.)  The May 6 sale triggered other black box systems with the result that E-Minis came to define the real value of the real securities in the Index.  Within three minutes, the Index dropped 3%.  Between 2:41 PM and 3:00 PM, the Dow dropped 998 points or 9.2% of its value and then turned around and regained most of the loss, closing the day down 384 points or 3.2%.  For a half hour, the market experienced a total disconnect from anything resembling the real world. [4]
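
The figures quoted above can be sanity-checked with back-of-the-envelope arithmetic; the implied index level below is derived from those figures, not taken from the SEC report:

```python
# Sanity-checking the Flash Crash figures quoted above:
# 75,000 E-Mini contracts, each worth 50 times the S&P 500 Index,
# with a total notional value of $4.1 billion.

contracts = 75_000
multiplier = 50        # dollars of notional value per index point
notional = 4.1e9       # total dollar value of the sale

implied_index = notional / (contracts * multiplier)
print(round(implied_index, 1))   # ~1093.3 -- a plausible S&P 500 level for May 2010
```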

If all this were merely a case of the inmates running the asylum it would be a tightly circumscribed problem of little concern to most people.  But the stock market is too important to ignore even at a time when fewer than half of Americans are exposed to it. [2] The various commodity markets are crucial to those directly involved in producing and using the commodities.  Real estate is central to almost everyone.  In short, the financial system is the foundation of the economic, social and cultural architecture of the entire community and it is in desperate need of reform, which will not be easy.  Even if the solutions were obvious there is at present no political appetite for reform.  Throughout the developed world there is an epidemic of self-interest and economic disarray that bodes ill for the kind of concerted and determined action that was achieved at Bretton Woods in 1944.

We tend to forget that money is an artificial construct and that it and the systems involved in storing and circulating it are highly abstract, fragile and reactive.  Neither dollars nor diamonds have any intrinsic value but are “worth” only what someone is willing to pay for them. [5] The systems used to regulate markets tend, on the other hand, to be clumsy compromises based on political ideology and are therefore slow to recognize changing conditions.  Moreover, any changes that are made in the governance of the financial markets must be coordinated with equally profound changes that need to be made in the broader economy.  We face a daunting challenge.  But as timeframes collapse and the global economy becomes ever more interconnected, market contradictions become less tenable and more threatening.  Time is short and the river is rising.

Subsequent Events

In the first quarter of 2020, volatility went through the roof in every stock market in the world, supposedly because of the pandemic caused by the coronavirus.  Actually, the down days seemed like nothing but a typical panic in the face of uncertainty.  It was as though the computers thought markets were normally certain and therefore risk-free.  The up days, on the other hand, looked like computers grasping at straws.  It was not edifying.

Earlier, Navinder Sarao, who had singlehandedly caused the Flash Crash (see Note 4 below), was convicted in a Chicago courtroom and sentenced to a year of home confinement at his parents' house in London.  He apologized for his prank, attributing it to Asperger's Syndrome, and claimed to have "found God."  The leniency was supported by the prosecution, which thanked him for educating them about exotic stock market fraud.

Notes

1. The Dow Jones Industrial Average (DJIA) of 30 large U.S. companies is a reasonable and widely understood simulacrum for the U.S. equities market.  There are several more accurate measures, notably the Standard and Poor’s 500 or the Russell 3000, but all are fairly well correlated and the Dow Jones has the advantage of a much longer history.  The Dow of course is no longer restricted to industrial companies but includes companies such as American Express, Disney, Goldman Sachs and Wal-Mart.

2.  Not everyone.  In 2013, Pew Research Center reported that investors were primarily white, middle aged college graduates with annual incomes over $75,000.  Currently, slightly less than half of Americans are exposed to the stock market either directly or indirectly, the lowest participation rate in many years.  The recent high was 65% in 2007, the year before the real estate bubble burst and the economy crashed.  Most Americans are not good savers (in part because wages have not kept up with inflation).  The average net worth of American adults is about $301,000 but the median is only $45,000, another reflection of economic disparity.  Most of that net worth is in real estate.

3.  The academic literature is full of analyses of every conceivable kind of volatility almost all of which is useless in the real market.  I use the term here to discuss the fluctuation of price over a given period of time.  Obviously the greater the change and the shorter the period the higher the volatility.

4.  On April 21, 2015, British authorities arrested one Navinder Sarao on a 22-count complaint of market manipulation issued by the U.S. Department of Justice.  Mr. Sarao is a London day trader accused of causing the Flash Crash by “spoofing,” a technique of entering thousands of large but fake buy and/or sell orders simultaneously in an attempt to influence HFT algorithms.  Conservative publications rushed to his defense and legal pundits emphasized that the government would have a hard time proving its case.  The charges against him carried a prison term of 380 years, which may have been what encouraged him to cop a plea to one count.  He was sentenced in late 2017.  Anyone seriously interested in the mechanics of high frequency trading and their sociopathic implications should read Flash Boys:  A Wall Street Revolt by Michael Lewis (Norton, 2014).

5.  Intrinsic value is another one of those slippery terms you struggled with in Economics 101 and this is not the place to attempt to resolve it.  Once upon a time, gold was defined as being worth $35 an ounce.  Why?  Because some economists and diplomats said so.  It was arbitrary but useful in a simpler age.  The price (and therefore the value) of gold is now set by the gold market which, like the stock market, is made up mostly of gamblers.  A painting by Pablo Picasso recently sold at auction for close to $180 million.  Please note that no combination of canvas and paint has an intrinsic value of $180 million.