Monthly Archives: August 2013

There’s No Business Like Show Business (Except Politics)

R. Alan Clanton, Thursday Review Editor

Sunday, August 18, 2013: Almost exactly four months ago I wrote in these very pages that “Americans love their sequels.”  I was making a comparison between the motion picture sequel—now an entrenched part of the Hollywood business model—and the political rematch, now seemingly also standard fare.

I referred to what is known in politics as the “BCD” phenomenon: Americans born after 1957, those who reached voting age by 1976, had never known a presidential election that did not include the name of a Bush, a Clinton or a Dole…not until 2008.  That hiatus from BCD was short-lived.

The irony of my admittedly strained analogy between movie retreads and political reruns is that along the increasingly blurry boundary between show business, Hollywood and electronic news (some might argue that that boundary has been nonexistent for years now), the strange and the surreal become commonplace.  Entertainment and music types weigh in on any issue large or small, Hollywood stars can—and frequently do—run for public office, and octogenarian actors appear at political conventions without an approved script.  But that’s the nature of show business.  Or politics.  Well, maybe both.

Let’s be clear: Hillary Rodham Clinton is running for President.  Her campaign began 15 minutes after Barack Obama took his second Oath of Office this past January.  Some might argue that her 2007-2008 campaign never really ended.  Though she has been cagey and non-committal in public and in recent interviews, she has a formidable campaign team already in place and actively making calls to the right people.

And though there are other Democrats and Republicans in the early stages of testing the presidential waters, Hillary Clinton—alone among them—stands as the presumed front-runner: a predictable rerun, perhaps.  Besides, even after her long, bruising primary and caucus battle with Obama, which ended a little over five years ago, it was widely assumed she would remain in the arena.  And in the wide wake created following the 2012 elections, there are few Democrats willing to challenge the presumption of a Clinton candidacy in 2016.  Even Vice President Joe Biden, who is unwilling to close the door completely on his own prospects, seems pre-shrunk when compared to Clinton.

That means the script for the sequel is back on the table, polished and ready for production, with at least one GOP heavyweight willing to step into the role of contender—former Florida governor Jeb Bush.  Talk about a Hollywood reboot.  That’s the nature of show business.

Still, we just can’t seem to leave the graves of Paddy Chayefsky and Marshall McLuhan alone.  And right now, a nasty, stagey, scenery-chewing brawl has ensued over the entertainment value of Clinton’s legacy and her de facto candidacy.

On Friday, August 16, after weeks of heated discussion and public debate, the Republican National Committee agreed to ban both NBC and CNN from participation in debates or forums between GOP candidates in the run-up to the 2016 elections.  The vote was unanimous.  Why the ouster of the two revered news networks?  Because both CNN and NBC are in production on their own major documentaries (NBC’s film is a docudrama to be aired as a mini-series) regarding the life of Hillary Clinton, and both film projects are believed to be—at least to many conservatives—little more than big-budget marketing devices crafted to establish Clinton’s candidacy as inevitable and the next big thing.

In its statement, the RNC said that the projects were a “thinly-veiled attempt at putting a thumb on the scales of the 2016 presidential election.”  Other Republican strategists and media watchers say, at the least, both networks should agree to offer equal time for similar documentary programs which explore the lives of potential GOP candidates.

CNN was quick to respond, stating for the record that its documentary was still in production and that the GOP’s criticisms were unfounded, and surely premature.   “The project is in the very early stages of development,” said the CNN statement, “months from completion with most of the reporting and the interviewing still to be done. Therefore speculation about the final program is just that. We encourage all interested parties to wait until the program premieres before judgments are made about it.  Unfortunately, the RNC was not willing to do that.”

CNN’s response itself is a thinly disguised attempt, perhaps, to convey what some suspect: that the CNN project might in fact be more unflinching and critical than some in the GOP expect—after all, how can a news organization seriously tell the story of Hillary Clinton while completely sanitizing the dark chapters and removing all the warts?

But NBC’s relationship to the political situation is more complex.  Robert Greenblatt, chairman of NBC Entertainment, is among several top NBC chiefs who openly supported Clinton in 2008, both with cash and through their powerful connections within the entertainment business.  Greenblatt also supported Obama in 2012.  Though the entertainment division of NBC is wholly separate from NBC News, at least in principle, both fall under the large umbrella of parent company Comcast, which also owns Universal.  This means that NBC’s mini-series may experience an even deeper penetration into TV markets and individual homes than the CNN documentary, and may therefore have a bigger impact on voters.

So, for some within the GOP who believe that both programs may be relatively fair in their portrayal of Clinton—meaning the projects will surely include the unflattering episodes from her life—the NBC mini-series, especially, raises enormous concerns over equal time and fairness.  But to those who voted at the RNC meeting last week, the implications of Greenblatt’s close political ties to Clinton and to Obama mean that the docudrama will be anything but fair.

A few political watchers and media analysts have pointed out that the GOP brass has wanted to get to this point anyway.  The long, arduous debate season of 2011 and 2012—though seen as initially advantageous to Republicans seeking to test and sharpen their messages of attack as they approached their showdown with Obama—soon proved to be a largely damaging process for the GOP, and especially front-runner Mitt Romney.  Those dozens of debates were watched by millions, and each was then endlessly analyzed on cable news forums and blogs for weeks.  What had been viewed as a positive proving-ground for the top-tier candidates quickly turned sour, and since last November, Republican Party chairman Reince Priebus has said repeatedly that the negativity and self-immolation inflicted deep, perhaps irreparable damage to the party’s image going into the fall.

Priebus and others within the GOP now think that fewer debates will result in fewer damaging confrontations.  The RNC hopes to limit the total number of televised debates to between seven and nine, and the recent dustups with CNN and NBC give it the tactical opening it needed all along.

Last week the Republican Party organized a massive email campaign titled “The Liberal Media Loves Hillary,” designed to get partisans to take the two networks to task.  The cover letter from Priebus asked followers to sign an electronic petition demanding that CNN and NBC drop plans to air the documentaries: “The executives at CNN and NBC would rather promote Hillary Clinton’s soon-to-be presidential campaign than remain true to their purported mission of offering unbiased news coverage.”

That many Republicans suspect the two projects will be tilted in Clinton’s favor comes as no shock.  Days earlier, GOP co-chair Sharon Day suggested in an email that the networks will gloss over many of the darkest chapters of the Clinton story, including newly expanded revelations regarding fundraiser and money-bundler Norman Hsu, accused by authorities of multiple counts of fraud, money-laundering and theft.  Others have asked (and not just conservatives) whether these massive documentaries will make room for the Clintons’ current problems arising from their foundation, which ended 2012 with a huge $8 million deficit and a series of new questions: how campaign cash and foundation money may have been mishandled, the relationship between the Clintons and some of their corporate donors, and the complex web of money pipelines between the various entities.

The Clinton Foundation recently moved into a large suite of offices in (are you ready?) the Time-Life Building near Rockefeller Center, and across the street from (are you ready?) NBC Television Studios.  This is an obviously unrelated real estate move, but its irony was already too mouth-watering for some conservatives who see conspiracies and collusion between the Clintons and nearly all media tycoons, up to, and including, the Loch Ness Monster.

It is unclear whether either CNN or NBC will be greatly moved by the GOP’s action, though a predictable outcome may be pressure—from stockholders, and from voices both internal and external—to apply a more carefully vetted and screened editorial process to the two film projects.  CNN says it intends to screen its documentary first in select theaters next year before it airs during prime time sometime in the spring (and the safe bet is that it will be recycled numerous times throughout the following weeks).

On the other hand, NBC’s mini-series will no doubt be heavily promoted, and the current brouhaha serves to enhance media buzz about the program, which raises the specter of the age-old paradox: ban an art show and hundreds more will appear just to see what the fuss is all about.  In this sense the GOP may lose a few short-term points as millions tune in to watch the mini-series, a measurable percentage of which may have been disinclined toward the “story” before the current controversy reached its boiling point.  Comcast and NBC get free publicity.

But the downside for the networks is of course lost viewership when those early debates finally begin, possibly in the late summer of 2015.  If the GOP makes good on its plan to reduce the total number of pre-Iowa debates to as few as six or seven, the competitors of NBC and CNN become the winners by default.  The CNN debate production formula has become an iconic and reliable fixture for those addicted to the political process (though of little interest to those generally allergic to politics in the first place).

But here’s a hypothetical: what happens if one or more of the Clinton documentaries turns out to be so unflinching that it tells the unvarnished truth?  Let the chips fall as they will.  Grab the dirty laundry and, to paraphrase Richard Nixon, go the hangout route.  NBC’s cozy relationship with the Clintons reduces the odds that the network of Chet Huntley, David Brinkley, John Chancellor and Tom Brokaw will be the one to present the harsher telling of that story.  So that leaves CNN, in my book the more likely news team to deliver something close to truly balanced and unfiltered.  Does that mean that at some later point the GOP and CNN shake hands and agree to be friends?

That depends on a lot of factors, and one is that CNN may agree to take a close look at its own editing processes to ensure something akin to fairness.  And there is also the legitimate and still unassessed matter of equal time.  Would it be possible to broker an arrangement by which CNN offers up similar airtime for GOP candidates?  And if so, how would the two entities manage that template?  And which Republican candidates would receive the nod, from either their own party or from CNN?

The complexity of those questions makes it unlikely in the current atmosphere that the GOP and the networks will find common ground.  In the meantime the GOP may get its strategic wish: fewer live televised debates in the run-up to 2016.  Clinton’s team continues to work systematically and diligently to establish the resources and tools needed to proceed with her de facto candidacy.

A huge political sequel is on the horizon for Americans, only this time there will be more commercials and previews while we wait in the theater for the feature presentation to begin.

The Great Debt Debate


White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You; Simon Johnson & James Kwak.

Book review by R. Alan Clanton, Thursday Review Editor | published August 13, 2013

Of all the contemporary economic debates between conservatives and liberals, the decades-old struggles over deficit spending and national debt must rank as one of the most contentious. Those debates can also be fluid and politically self-serving, with liberals embracing debt for a decade or so, then conservatives—if not embracing it—at least ignoring national indebtedness. Debt can be a tool, a resource, a curse or a burden, and few elected officials are immune to the seductions of spending taxpayer money for immediate political gratification. Often, the moral high ground can shift depending on which party controls Congress and who is sitting in the Oval Office.

Solutions are sometimes instantly divisive. Many liberals and progressives react to talk of spending cuts as inherently destructive to social programs and entitlements, and promote tax increases (especially among high earners) as a solution. Republicans frame the question as one of fiscal responsibility, and the GOP’s narrative includes a government that must learn to live within its means, along with tax cuts. Neither side wants to talk honestly about raising taxes, and the elements most in need of scrutiny and reform—Social Security, Medicare, defense spending—are the most sacred of cows. Special interest groups and lobbyists on all sides of these issues keep the pressure on those in Washington and make conciliation and frank conversation difficult.

Conservatives and many Republicans have for decades argued that the country can have a balanced budget and restrained spending, and have a growing, healthy economy, but it was during the presidency of a Democrat, Bill Clinton, that the U.S. finally settled into that elusive, magic contemporary combination. That budget surplus in the 1990s can be attributed to several key factors: the end of the Cold War, which greatly reduced military spending; a generally healthy economy coupled with a long period of economic growth, which could be dated back to the Reagan years and the end of the 1981-82 recession; and an equally long era of relatively low unemployment, which meant that more Americans were earning wages and paying taxes. Both Democrats and Republicans took credit for the success. By the early-to-middle aught years the U.S. economy seemed bullet-proof, ticking along even through 9/11, a variety of gas and oil price shocks, and various Wall Street and international stock disruptions.

But, as we would learn as housing markets began to deflate in 2007, and from the market collapse and bank failures of 2008, the U.S. economy was not as resilient as many thought. The subsequent Great Recession brought about brutal market contractions and heavy unemployment, as well as plenty of blame to go around the table. This time around politicians dodged any accountability, and Democrats and Republicans blamed each other for the enormous mess. In the last months of the Bush presidency and the first year of Obama’s administration, the government spent $1.5 trillion on halting the economic slide and rescuing mortgage institutions, banks and insurance firms from failure. But the damage was done. During the 2012 presidential election season, candidate Mitt Romney sought to position the economy and jobs as the central priority, even as President Obama sought to grab even the smallest economic good news as evidence that his administration must be doing something right.

Meanwhile two major wars, and a trillion and a half spent on financial bailouts and economic stimuli, have driven the nation’s debt to dizzying heights.

The blame game continues even now, with predictable plot twists: budget showdowns; dueling press conferences by stubborn Republicans and obstinate Democrats on Capitol Hill; explanations in the media about “sequestration” and “earmarks”; party-line arguments over who benefits from tax cuts; threats of government shutdowns; layoffs of thousands who work as government contractors; and all the usual hyperbolic talk of paychecks being withheld from park rangers, bridge safety inspectors and VA Hospital nurses; and the specter of seniors waiting on social security checks that never arrive. Where these staged sideshows do not evoke outrage or fear, they induce glazed eyes, or worse, frustration and apathy.

The financial pain of this recession—along with the costs associated with two major wars in the aught years—has forced to the forefront the concept of the national debt and deficit spending, a deeply contentious issue with incalculably huge consequences for the future of the economy of the United States and its trading partners.

To make matters worse, the issues are immense and complex, so seemingly intractable that few reporters, and fewer average Americans, have patience for the business of sorting out what it all means to our daily lives and our personal finances. Most reporters, even those who write professionally about economics, seek to simplify—often oversimplify—leaving most folks stuck with nightly platitudes, generalizations and half-truths. Whenever any politician talks about spending cuts, their political opponents wave the bloody flag: cuts will hurt Social Security, Medicare, health care, military spending and other essential programs.

Then, with politicians at their impasse, there are all the familiar patterns as the headlines and television news reports trumpet the “looming budget battle” or the “coming fiscal showdown.” There are standoffs, threats, partisans walking out of meetings, and last minute deals to keep the government operating. Then, there is another round of finger-pointing and blame.

But, in reality, what do these often tiresome political confrontations really mean to us? And are there consequences to a nation that carries the heavy burden of debt?

Authors Simon Johnson and James Kwak have written a timely, carefully-paced book, White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You, crafted neatly for consumption by any reader who wants a full understanding of what these arguments mean to our future.

The authors deftly graft the historical examples of economic successes and failures, especially the complex issues of taxes and spending, onto the current conversations about national debt.  They open with the War of 1812 and the British burning of Washington, the episode which gives the book its title.

For the Americans, the problem wasn’t bravery or fortitude, but was instead a matter of resources and money. Congress, though readily inclined to go to war with Britain, had infamously refused President James Madison’s requests for tax increases as a way to pay for war preparations. Many in Congress saw taxes as the most sacred of issues, trumping even defense of the nation, and after a long, arduous political fight Treasury Secretary Albert Gallatin turned to wealthy Philadelphia bankers to finance the war. But it was too late, and the better-trained, better-armed British navy and army inflicted heavy damage to the underprepared and poorly supplied Americans. The lesson extended even further: Britain was able to win those early victories in the Americas even while fighting a sustained, full-scale war in Europe. It was the economic dynamism of England, along with its capacity to raise money through taxes, which had given the British the upper hand.

The burning of Washington had been a painful moment for the still-young United States, and Johnson and Kwak show how that early experience with fiscal crisis set in motion antipathies and entrenchments which would affect the national conversation for centuries.

Using clear examples, the authors walk the reader through the great economic cycles, along with the sometimes inevitable path toward war: the enormous costs of the War of 1812, the Civil War, and World War II, each of which left the U.S. with mountains of debt. They illustrate how in each of these cases the United States eventually found its way back to stability and prosperity. Debt as an issue facing a nation at the start of war, and after a war’s conclusion, is especially important, as Johnson and Kwak point out. The U.S. paid for World War I largely through tax increases despite the vast increase in spending: federal expenditures had been roughly $700 million in 1916, but had increased to $18 billion by 1919. Further, Congress had finally settled the thorny issue of national taxation with the Sixteenth Amendment, establishing that the federal government’s authority to collect income tax superseded individual states’ interests and fiscal machinations.

But at the end of World War II, the United States faced arguably its most immense debt dilemma, for the U.S. at that point had run up the largest deficit in history, amounting to more than 110% of the American GDP. During the war, traditional military spending had increased to over $80 billion, and the mostly secret cost of the Manhattan Project reached $1.6 billion. But the resilience and dynamism of the U.S. economy meant that most of that vast debt was paid off within a few years, in part because of the new economic preeminence of the United States as it converted from war production to a combination of Cold War preparedness and the rise of the middle class.

Johnson and Kwak also trace the processes set in motion as a result of the market collapses of 1929 and the Great Depression, which led first the United States, then other nations, down a variety of misguided policy paths, including a vicious downward cycle of tightened credit, falling property values and debates over the gold standard. The authors illuminate the central concepts of money itself. What defines money? What makes paper money worth more, perhaps, than coins (other than the convenience of not carrying around heavy metals in one’s pockets or bags)? Why do we have Benjamin Franklin to thank for the system of printed money we rely upon to this day? What is the historical importance of the Bretton Woods Agreements of 1944-45? And what market forces pushed Richard Nixon to take the U.S. off the gold standard in 1971? Though perhaps obscure to many American readers, each of these chapters from past centuries has altered the trajectory of our conversation about national debt to this day.

The authors discuss the core notions of deficit spending—how it works, why it works, its advantages and its weaknesses. How much do deficits matter in the short run, or the long run?

The national debt that the United States carries is a growing danger for future generations, and a potentially pivotal flaw in U.S. foreign policy and trade, and many conservatives and liberals are in agreement on that central point. But the debate gets prickly when it comes time to find a remedy. The authors offer up a significant, long chapter devoted to solutions—including possible remedies to our most difficult challenges: health care reform, Social Security and Medicare, the three things most likely to raise the decibel level in any political conversation. There are plenty of components of their proposals which will rattle conservatives about as much as liberals, and a few libertarians might toss the book directly into the trash after just a few chapters.

Johnson and Kwak have peppered the book with numerous charts and graphs—a notably helpful tool in grasping the historical context of debt on a national scale. Their 240 pages of text are also scrupulously researched and fully sourced. Good luck finding even one page free of copious footnotes, and when you reach the end of the text there are still 100 pages of notes and citations. This book is readable and moves surprisingly quickly. On the whole, not light reading, but not overly dense either, at least not for anyone who wants a complete understanding of what our complex arguments about national debt are really about, and why it should matter to you and your children.

See more at: http://www.thursdayreview.com/NationalDebt.html

“I Believe That Rock Can Do Anything”*

Digital art by Rob Shields

Book review: Who I Am; Pete Townshend; HarperCollins

Review by R. Alan Clanton, Thursday Review Editor

August 1, 2013: The rock and roll documentary has raised its game of late and moved seemingly into its golden age.  Massive, sprawling documentaries have appeared within the last 24 months covering a variety of iconic groups and musical phases of pop and rock: The Rolling Stones (Brett Morgen’s The Rolling Stones: Crossfire Hurricane), George Harrison (Martin Scorsese’s Living in the Material World), The Eagles (History of the Eagles, Part 1 & 2), Sound City Recording Studio (Dave Grohl’s Sound City), Bruce Springsteen (Thom Zimny’s The Promise: The Making of Darkness on the Edge of Town), Ginger Baker (Jay Bulger’s Beware of Mr. Baker), fatherhood among rockers (Andrea Blaugrund Nevins’ The Other F Word), U2 (Davis Guggenheim’s U2: From the Sky Down), and Led Zeppelin (Sonia Anderson’s Dazed & Confused).  Many others preceded these in the middle aught years, including Paul Rachman’s American Hardcore: The History of Punk Rock 1980-1986, Davis Guggenheim’s It Might Get Loud, and Michael Gramaglia’s End of the Century: The Story of the Ramones.

The caliber of these documentaries has been extraordinarily high, save for the sometimes fair complaint that these films—like the decades of musical offerings of their iconic subjects—occasionally wallow in self-indulgence and over-the-topsmanship.  But this is rock and roll, after all.  Give the devil his due.  Besides, no one has ever accused Martin Scorsese of being a stern and stringent editor of his own stuff, and few could suggest that rock legends Mick Jagger or Glenn Frey should suddenly rein themselves in for the sake of a film crew.  Rock and roll is an excessive business; therefore the cinematic telling of its checkered history must excel to excess.

Books are a different matter entirely.  Neil Young’s Waging Heavy Peace and Rod Stewart’s Rod: The Autobiography indicate to us that when the guitars finally rust, iconography—or at least the marketability of the inside story of rock—is portable.  Aging rockers can set the record straight, as long as their memories serve them (and we salute them if their detoxified, threadbare recollections are intact as they enter their senior years), and they can along the way offer apologies, candor, condolences, and contrition, in addition to the heavy name-dropping and amusing party anecdotes.  Readers love excess perhaps more than even fans of the rockumentary.

Still, the partying anecdotes can wear thin: I found a copy of Christopher Andersen’s Mick: The Wild Life and Mad Genius of Mick Jagger (2012) on a sale rack at a bookstore recently, and bought it without hesitation.  After only two days I was not merely disappointed, I was exhausted from the excess.  The problem was twofold.  First, Andersen’s book is fraught with hazards, not the least of which is endless name-dropping (I think by now we get the point that Mick moved among the circles of other famous people).  Second, who cares?  Andersen spent precious few lines of any given chapter actually discussing rock music—its creative processes, the art and craft of recording, or its performance—exchanging the “rock and roll” part of the biography for parties, alcohol, drugs and sex.

This is why it was instantly refreshing as I began my research for this essay to discover (again), that of all the Beatles’ peers, and among those bands who survived well past the end of the Beatles’ era, the group which seemed the most prone to a love of the creative process and musical development was The Who.  And this, it can be easily argued, was the work of one man, Pete Townshend, whose recent autobiography, Who I Am, seems to stand apart from the other literary attempts to explain the deconstructionist milieu that is rock and roll.

Townshend was arguably the first of the hard rockers to demonstrate to the larger musical literati that rock music could truly exist in the same venue with innovation and technical prowess.  Certainly the Beatles had blazed a wide path, bringing into the studio every form of experimentation and textural layering available at the time: the tabla, the sitar and other eastern instruments; all manner of classical strings and horns and entire symphony orchestras; contextual elements and sound effects from all over the map.  In this sense, the Beatles had no contemporary rivals, save perhaps for Jimi Hendrix, and Brian Wilson of the Beach Boys.  In 1969 The Who would complete Tommy, which became then—and remains now—the definitive grand-scale melding of thematic rock and operatic storytelling.  But it was Townshend, with The Who’s release in August 1971 of Who’s Next (42 years ago next week), who proved that the edgiest of hard rock could share the studio and the stage with the Moog synthesizer and other electronic instruments and not look or sound pretentious.

Who’s Next was a landmark moment for rock and roll, and arguably The Who’s crowning achievement.  The album was as innovative as anything seen at the edges of hard rock, and it became one of The Who’s biggest sellers worldwide.  It also established The Who, for all time, as a member of the most elite club one could imagine in the early 1970s—the Global Supergroup, with peers among only The Rolling Stones, Led Zeppelin and the recently fragmented Beatles.  Who’s Next was preceded by a few weeks by the release of “Won’t Get Fooled Again,” a dazzling, gut-wrenching song showcasing the complex dynamism of Keith Moon’s percussion, the raw power of John Entwistle’s bass line, the full range and intensity of Roger Daltrey’s vocals, and breathtaking guitar work by Townshend, the end result being one of the greatest rock songs of all time.  Most of the cuts on Who’s Next became classics of hard rock and album-oriented radio, and several of the songs became iconic anthems: “Behind Blue Eyes,” “Going Mobile,” “Bargain,” and of course “Baba O’Riley” (aka “Teenage Wasteland” to many rock fans), music durable and fresh even now, over 40 years later.

Townshend’s autobiography comes out the same year as the 40th anniversary of the release of Quadrophenia, an album designed to exceed even the power of Who’s Next and take rock music to the next new innovation in sound—“quadrophonic,” which had been intended as the eventual successor to stereo and high fidelity.

The notion of “quadrophonic” had its roots in the group Pink Floyd, whose members had experimented in the late 1960s with studio and live-performance technology which would direct sounds toward listeners from all directions—or at least from four directions.  But, like the other infamous cul de sacs of technology, quadrophonic fell flat, plagued from the beginning with problems of industry standards, high costs, competing patents, and consumer disinterest.  In his book, however, Townshend recalls that the first inkling for him came as he playfully sought to describe a teenager suffering from a four-sided personality disorder—schizophrenia becomes quadzrophenia, a term which later morphed into quadrophenia (without the “z”).

Like his efforts to forge grand-scale fusions of narrative rock, thematic storytelling, youth opera and even on-stage visuals—Who’s Next, like Tommy, had started out as an operatic package originally titled “Lifehouse”—Quadrophenia was also intended to be a multi-tiered epic, drawing in a variety of tools and technical gimmicks and visuals, but also taking the listener inside the head of Townshend’s ongoing central musical character, a disillusioned, confused, splintered but energetic British teenaged “mod,” the abbreviated term used to describe the young, beatnik-rooted modernist jazz, R&B and rock & roll fans of the London nightclub and music scene of the mid-1950s and 1960s.  Mod was also a term used to describe those young people in London of working-class backgrounds who sought release and recreation through music and the late-night world of coffee houses and mild-to-moderate amphetamine drugs (then generally legal in Britain).  The formation and self-identity of The Who had been largely built around the subculture and ethos of the mods, and other British bands also staked out their claim within mod circles, including The Kinks and Small Faces.

This fascination with the mindset of the mods had been Townshend’s cherished creative challenge almost from the beginning of his songwriting career, but during the era of Tommy, Who’s Next and Quadrophenia, it reached the level of obsession.  Even casual musical fans of that era could see the theme shining brightly through all those iconic songs: “My Generation,” “Magic Bus,” “Substitute,” “Young Man Blues” and scores of other tunes offer an unvarnished look into the bewildered, restless and sometimes tormented mind of the typical teen and young adult of British urban life.  Reading Townshend’s book, however, one can closely follow how his fixation on youth culture parallels his creativity and his love of musical expression.  His autobiography reveals what we already knew: Townshend truly loved the creative process and the challenges of musical innovation.

Townshend traces The Who from its earliest incarnations, which included the musical inclinations of his own family: his mother had been a talented jazz singer in several big band combos; his father was a musician who dabbled in jazz, swing and big band sounds, and by his early teenaged years the young Pete was already schooled in various instruments, including the banjo and guitar.  His childhood friend John Entwistle was equally talented on several instruments, including horns and brass.  At about the time that Pete was considering art school, thinking it a clever way to meet girls and possibly persuade them to pose nude, Roger Daltrey had formed a party band called The Detours.  Musical chameleons, The Detours played private parties and small venues using cheap amps and homemade guitars, belting out jazz, country & western, R&B, Dixieland, conga tunes, anything if the gig paid cash.  The band included Doug Sandom and Colin Dawson.  Eventually Daltrey let Townshend audition, and The Detours Jazz Band was set upon its trajectory, with Townshend on guitar and Daltrey on trombone.

Later Sandom and Dawson fell away, but once Keith Moon was added on drums, the band made its final progression toward rock and roll and its metamorphosis from party band to mod symbol was nearly complete.  The combo drew heavily on the rich rhythm and blues sounds from America.  “At the time,” he writes, “we were getting most of our inspiration from growling R&B songs by Bo Diddley and Howlin’ Wolf.”  They had also discovered the value of a bit of chaos, and began their early experiments with feedback and distortion in the studio and on stage.  Even in this early chapter of Townshend’s life he looks in earnest for musical innovation and achievement, realizing that the band will remain obscure and in the shadows without better equipment and a tighter sound, without the right speakers, the right amps, the right instruments, even the right ways to promote the band.

After renaming themselves The Who, their earliest manager, Peter Meaden (the person chiefly influential in guiding them toward the mod movement and subculture), insisted that they try the name The High Numbers, London insider lingo to designate anyone in close second-place proximity to “Faces,” those mods at the top of the trend-setting pyramid, but well above “Tickets,” which were simply the followers and groupies and young people who came to dance.

Live performances began to gather notice and larger crowds, and even their most controversial trademark tricks—fuzzy guitars, amplifier feedback, distortions, Townshend’s leaping and jumping, and especially Moon’s madman drumming—began to solidify the loyalty of their mod fans.  Daltrey’s bellicose, powerful singing, which must have been a shock to many in the audiences not expecting such a potent, soulful sound to come from this cherub-faced, fair-skinned, blonde English boy, very nearly set their signature sound in place.  The Who was on its way toward stardom, but still lacked a moment or two of good luck.

Then, by chance, band members met Chris Stamp and Kit Lambert, who were at the time searching for a subject for their proposed television documentary film project about a small British street band struggling to make it big.  Chris Stamp was the brother of the actor Terence Stamp.  Lambert was the son of the music director of the Royal Ballet at Covent Garden.  Their love of art, music, opera, film and theater overlapped neatly with the shared visions of Townshend and his mates, and soon they replaced Meaden (for a buyout of £200) as the band’s managers.  Lambert and Stamp quickly and happily suggested they change the band’s name back to The Who.

The Who played in the shadows of the heavyweights of the club and auditorium scene, opening for The Beatles, Dave Berry and The Kinks.  They gleaned valuable information and lessons from the older musicians and the pros, improving their style and their sound along the way.  Eventually, Lambert and Stamp got The Who an audience with Shel Talmy, producer for The Kinks and a major player within British music circles, and the group chose one of Townshend’s earliest original rock compositions, “I Can’t Explain,” as their audition song.  It was another big break—Talmy immediately booked them studio time to record the song.  Unsure of Townshend’s guitar skills, Talmy had a young Jimmy Page play the guitar instead.  Once recorded, the song languished for a few months while The Who continued their live shows.

In the meantime they picked up regular Tuesday gigs at The Marquee, a well-known jazz and blues club.  It was here, among throngs of London’s hippest mods, that The Who gained real traction.  The crowds grew, as did the group’s musical power, and Townshend and others began to expand the soon-to-be-iconic use of Union Jacks as logos, military medallions, the fanciful tall-letter Who logos, and the RAF concentric circle designs on posters and handouts.  The now famous Marquee poster, “Maximum R&B,” originated with a design for those very shows.  And in the book Townshend says that it was at those game-changing shows at The Marquee when he realized, a bit nervously perhaps, “that Mod had become more than a look.  It had become our voice, and The Who was its main outlet.”

Soon, “I Can’t Explain” became a minor hit, and shortly afterward they recorded “My Generation,” the roots of which, Townshend explains, were found in his first wordplay doodling for “I Can’t Explain.”  Thus, at the very start of their musical success, The Who had already found its inner voice and its thematic backbone—a linkage between the mixed emotions, displacements and angst of youth, and the need to sing about it through the energetic vehicle of rock and roll.  By the time The Who gets around to recording “Magic Bus,” they are already making history, and their music is moving up the charts in the U.K.

The book offers a fascinating portrait of a band moving along the path from obscurity and subculture identity to the same band destined to produce the resonant, enduring and transcendent hard rock of “Won’t Get Fooled Again” and “Love Ain’t For Keeping,” and the epic, operatic songs like “Pinball Wizard” and “We’re Not Gonna Take It” found in Tommy.

To be sure, Townshend offers plenty of clear-minded and unapologetic anecdotes about alcohol, drugs and sex, along with enough reckless driving and minor accidents that he would eventually lose his right to drive in the U.K.  There are the stories of the fights—verbal and sometimes physical—in studios and on stage.  There are the ongoing management and legal struggles, typical stuff in the business side of rock and roll when young adults, caught up in partying and girls and fast cars, sign complex agreements and contracts, and, as a result, there are the eventual constant tensions between Townshend, Entwistle, Daltrey and Moon, and their long-time managers, Kit Lambert and Chris Stamp.  For Townshend, there are self-doubts, there are evictions, there are appearances in courtrooms, and there are bitter arguments with girlfriends and wives.  There is depression and there is fear that the band “will implode.”  Later, by the mid-1970s, there is the issue of his hearing loss and tinnitus, which will lead some in the press to make him the poster boy for the dangers of loud music.

But what makes the book revealing is how he seeks throughout his life to find his way back toward his love of music.  After several marriages and a number of mental breakdowns, his eventual sobriety can be seen as the final destination of a spiritual quest.  He has near-death experiences from overdoses of LSD and from alcohol poisoning, not to mention the time he nearly killed himself by jumping from a hotel window into a swimming pool.  As with many rock stars, the Grim Reaper seems to hover just off stage.  The loss of so many of his peers—Brian Jones, John Bonham (he doesn’t mention Bonham specifically despite several references to Led Zeppelin), Jimi Hendrix, various producers and engineers, and his band mate Keith Moon—along with the near loss of Eric Clapton and others, amplifies his own near-death experience in 1981, when his alcohol abuse finally combined with his excessive cocaine consumption.  He collapsed in a bathroom of a London nightclub, barely breathing and with his heart stopped.  He was rushed to a nearby hospital where he awoke only after doctors inserted an adrenaline needle into his heart.

Keith Moon’s death in September 1978 briefly shattered the cohesion of the band and shook the world of rock music.  Moon died from an overdose of sleeping pills, muscle relaxers and sedatives, which he had washed down with a bottle of champagne.  It was a devastating moment for Daltrey and Entwistle.

Where Tommy and Who’s Next had placed them on top, the death of Moon threatened the existence of the band forever.  It had been Moon’s manic, madman’s energy which acted as a theatrical, spirited foil to the other signature performances on stage: Daltrey’s singular and soulful voice, Entwistle’s stoic but powerful bass line, and Townshend’s masterful guitar handling.   But Townshend reveals that his mind went briefly in the opposite direction, immediately urging the others to join him on the road with more touring.  “Without grief, in its usual manifestation, I had to find a different way to deal with my loss.  You can say I was in denial.  Keith had been a pain in the ass, but he had also been a constant joy.  Once he’d gone, something irreplaceable was missing…all that was left was a sense of his ghost, playing the drums, laughing as he played ‘Who Are You’ with his earphones on fire.”

After Moon’s funeral, Phil Collins, then touring with Genesis, called and offered his services as drummer.  Other percussionists offered their help as well, but Townshend had already made up his mind that Moon’s successor would be Kenney Jones.  At about that same time Townshend met Sex Pistols member Johnny Rotten, who was discussed for the lead role in the film version of Quadrophenia.  Townshend reveals that shortly afterwards, tension was already brewing in the studio between Daltrey and newcomer Jones.  Daltrey regarded Jones as talented and solid, but not made of the same energetic stuff as Moon.  But for Townshend, Moon’s sudden and painful exit became a kind of liberation, and he saw the band as able finally to move on into its next chapter.

Along the way Townshend weaves the backstory of his love life, his marriages and his affairs into the musical progression of the band and his own creativity.  Like many newly-mellowed rock icons of the era, he seeks neither to sugar-coat the facts nor justify his bad behavior, and he admits to being a selfish oaf and a bore on many occasions.

Though in the end Quadrophenia was met with warm—but not overly enthusiastic—reviews, it nevertheless marked the end of one era of The Who’s music, and the inevitable start, perhaps, of another.  Musically, Quadrophenia strikes a rich chord, just as all previous Who albums had, but its production was plagued with technical problems from the start.  Only in the 1990s would the whole Quadrophenia concept get repackaged and rebooted with some degree of success.  (The Who recently concluded a summer 2013 European Quadrophenia tour built largely around an updated music and multi-media production as a celebration of the Quadrophenia era.)

Further, in the book Townshend explores his own growing concerns and recurring fears throughout the 1970s that The Who might grow old—too old, in fact—to remain connected effectively to the band’s roots in the subculture that was the youth movement of the 1960s.  Even by the beginning of the 1970s young people were moving in a variety of directions—the American and European hippies, the expanding anti-Vietnam War movements, the more radical processes found in the Black Power and underground groups, and the bohemian and utopian movements.  Then, to confound the experts who thought they understood “youth culture,” millions of American and British young adults became conservatives, or at least conformists, making much of the ethos embraced by bands like The Who seem irrelevant even as the songs continued to pay royalties.

Still, in the end, the music mattered most, and The Who did endure, even well into the aught years of this century.  Like the biggest of the supergroups, The Who gets its moments of worldwide iconic durability and commercial nostalgia—Super Bowls, Olympic Games, hurricane relief concerts.  As for those dozens of sprawling rock documentaries and pop music histories I mention at the beginning of this essay, The Who’s long shadow, and especially that of Townshend, stretches across some part of each of these retrospectives.  To the very last, all of these rockers make at least some mention of Townshend’s deep and inescapable influence.

One phrase Thursday Review readers may tire of seeing me employ in reviews is “a fast read,” or, sometimes the word “readability.”  Some books are dense, some books are slow—which is not to say they are bad, merely slower than others.  I gave up on Andersen’s bio of Mick Jagger after only a couple of days.  It was slow, and pointless, and had little to do with the music.  Townshend’s book is a fast read.  It was delivered to my door on Wednesday, and by Sunday morning at 11:45 I had completed nearly all 500 pages.  And it had everything to do with the music.

The book illuminates Townshend’s restless energy through the decades, and we see that there is no artistic or creative impulse left behind—art, graphic design, theater, film and television, recording technology, books and publishing, classical and baroque music, opera, charity and philanthropy, political understanding and awareness, religion and spirituality.  The central thread, however, is that great fusion that was rock and roll—swing, country, jazz, R&B—melded together as only he could envision it; energies and frustrations of youth, a bit of distortion, a touch of feedback, and a few hundred smashed guitars along the way.

In this sense Pete Townshend has few rivals from his generation.

*Other sources used for this article include Before I Get Old: The Story of The Who, Dave Marsh; St. Martin’s Press, 1983.  The title of this article is an excerpt from a 1970 interview Townshend gave to Rolling Stone.  The full quote, as cited in Who I Am, is as follows: “I believe that rock can do anything, it’s the ultimate vehicle for everything.  It’s the ultimate vehicle for saying anything, for putting down anything, for building up anything, for killing and creating. It’s the absolute ultimate vehicle for self-destruction, which is the most incredible thing, because there’s nothing as effective as that, not in terms of art, or at least what we call art.”

See more at: http://www.thursdayreview.com/TheWho.html

What is Gained or Lost With Drones?


By R. Alan Clanton, Thursday Review Editor

July 21, 2013: At the height of the Vietnam War, President Lyndon Johnson faced daily pressures to escalate the level of combat operations, especially aerial bombing, then seen as a relatively “safe” U.S. tool to deploy against the North Vietnamese. Johnson, who famously fretted about the heavy bombing and often micromanaged the targets, signing off on them only after personally reviewing maps and aerial photos, worried rightly about three kinds of blowback: the bombing might trigger a wider war with the Chinese or the Soviets if their personnel were killed or assets destroyed; American pilots would be at grave risk, even at high altitudes, from the Russian-supplied anti-aircraft weapons in use by the North Vietnamese; and the collateral damage to civilian areas could be severe.

On this third point, Johnson worried mightily and continuously: even one mismanaged bombing mission could lead to disaster—residential areas, schools, hospitals, markets would be destroyed, along with the inevitable loss of life among the civilian population. There was no easy solution: low altitude bombing using lighter, faster jets would increase precision, but would put more American pilots at risk; high altitude bombing would lower the risk to U.S. airmen, but would reduce the accuracy of the ordnance dropped.

Johnson was never able to reconcile his inner torment, for neither path was ideal in the highly unpredictable and incalculably complex endeavor of warfare. And indeed, despite what were cutting-edge targeting technologies available at the time, civilians were killed in U.S. air strikes over North Vietnam. Fighters and bombers were shot down, and many of the pilots who survived crashes or ejections then spent years in POW camps. Johnson was never able to sanitize the process, nor remove the risk.

Hawks reasoned that the bombing was an essential tool to force the North into taking negotiations seriously, and as a way to demonstrate U.S. commitment to victory. Doves insisted that the bombing was only a technological form of terror, and that it solidified civilian allegiances—aligning the peasant population and middle-class elements with the Marxist-Leninists in Hanoi. Even after the war, the debate would not be settled.

Decades would pass, and then, during Operation Desert Storm, television viewers the world over would watch as the newest technologies enabled smart rockets and missiles to become routine instruments of war. Cruise missiles launched from battleships in the Persian Gulf could be monitored from high altitudes, or from satellite observation positions in space, as they found their way to their intended targets—aircraft hangars, Iraqi bunkers and command posts, tanks and vehicles, fuel depots. Neither Saddam Hussein’s army nor his much-ballyhooed, elite Republican Guard was any match for the dazzling hyper-accuracy of the smart ordnance the U.S. and its allies deployed, and these weapons were widely credited with ensuring Allied victory in the liberation of Kuwait.

But Desert Storm may have been the last major conflict in which enemy combatants could be easily distinguished by their uniforms or their combat vehicles.

In the current War on Terror, a Tomahawk cruise missile would be a costly, clumsy form of overkill—antiquated, in fact—for the mission U.S. commanders face. American soldiers in Afghanistan have no way to distinguish friend from foe, nor an easy way to spot the Taliban or al Qaeda operative standing amongst a group of Afghan men on a street, sitting in a coffee shop, or riding in the back of a truck. Even Afghans in the uniforms of police or military can turn, and in recent months many U.S. soldiers have been killed by presumably “friendly” Afghans. A decade of war in Afghanistan has brought little progress in making distinctions between Qaeda combatants—real or potential—and civilians.

Over the long course of the war in Afghanistan, as it became apparent in the last years of the administration of George W. Bush and the early years of the administration of Barack Obama that combat operations on the ground would be endlessly mired in the conundrum of sorting civilian from combatant, the use of smarter tools of war came into prominence. Among those high tech tools were drones—small, light, unmanned flying machines designed originally for reconnaissance and surveillance, but easily adapted to carry powerful weapons.

Although these remotely-piloted devices were used only occasionally by Bush (most estimates suggest that between 40 and 50 lethal drone strikes were carried out between 2004 and 2009), their use has increased dramatically in the Obama years. Drones can now be credited with as many as 3300 killed in all operations across the wider jihadist world—Iraq, Afghanistan, Pakistan, Yemen, Saudi Arabia and other locations. Drones have also been deployed in large numbers for surveillance and intelligence-gathering over many other nations, including Syria, Jordan and perhaps Iran.

On the Obama scorecard, the use of drones may be in fact his most hawkish tendency, placing him squarely upon the foreign policy footprint of the neo-conservatives so widely derided in many of the public debates and media discussions of the last five or six years. For most progressives, this predisposition toward high tech killing devices is perhaps even more troubling than Guantanamo. Liberals and civil libertarians see drones as the most slippery of all slopes, a high tech device easily transitioned from military to civilian applications. For many conservatives of the libertarian stripe, the use of drones raises a variety of timely ethical questions, all made more urgent because of the recent revelations of government intrusion: NSA domestic spying, IRS overreach and political abuse, Justice Department inquiries into reporters’ phone records, even the recent fracas over police department use of surveillance drones. During recent Senate hearings, FBI officials admitted that surveillance drones have been used over U.S. territory for law enforcement and tracking purposes, and NSA officials have acknowledged the use of drones in the skies of even our closest allies.

Like any newly-deployed weapon, drones raise obvious questions about what constitutes “fair” or “ethical” combat and warfare. Defenders of the use of drones point out that these arguments can be traced back centuries: artillery, machine guns, tanks, airplanes, submarines, atomic bombs—all have been at one time or another declared unethical as weapons. Opponents of drones, however, suggest that remotely-guided flying machines are nothing more than the ultimate form of detachment from ethical war—a step toward the total erasure of scruples or human interdiction in the choice of who will die and who will survive.

Military commanders in the field admit that although civilians are sometimes killed by drone strikes in places like Afghanistan and Pakistan, those occasions are rare, and the percentage of collateral death is actually lower than what is wrought by more indiscriminate forms of combat, such as bombing, targeted missile-strikes and even boots-on-the-ground operations. More importantly, say intelligence and terror experts, drones have created fragmentation among Taliban and Qaeda leadership by making it virtually impossible for more than two or three operatives to gather in one spot for any length of time. Marked individuals, those with connections to terrorists, can be tracked—often easily—and the result has been a dramatic increase in the number of Qaeda assets killed. Those among the civilian population know this, and keep their distance from any Taliban gathering or any known terror cells. This creates isolation, and weakens command-and-control. Coupled with data-mining and other high-tech tracking tools, terrorists must refrain from nearly all forms of electronic activity—no laptops, no wireless internet, no use of cell phones or handheld devices. And since gathering in one spot for more than a few minutes is now dangerous, no newly produced recruitment videos have emerged—those infamous short films of dozens of young recruits in ski masks tumbling, crawling and leaping with rifles. And if there are no videos, there are surely no secret training camps.

Supporters say that drone search-and-kill operations work with frequent success. Just a week ago the U.S. military announced the death of Said al-Shihri, a violent, radical militant with direct ties to several terrorist plots and activities in recent years, including foiled “underwear” bomb plots in England and the U.S.

Al-Shihri had at one time been detained by the American military and held in Guantanamo. After his release, he returned to his homeland of Saudi Arabia, and shortly afterwards he went back into the terror business in Yemen, quickly emerging as a top leader of al Qaeda in the Arabian Peninsula. His death would seem to confirm what drone advocates have said: drones work effectively, and undercut Qaeda efforts to groom and sustain leadership.

For most supporters of drone deployment, the bottom line is sound and compelling: fewer U.S. boots on the ground mean fewer deaths of American combat personnel; drones allow the U.S. to avoid being drawn into the complex, often intractable issues of local disputes; and drones are highly precise tools, ideal for the decapitation of terrorist leadership, with a relatively low chance of tragic spillover into civilian life.

But critics suggest that drones do not have the high success rate and low collateral percentages their enthusiasts boast of so frequently. Further, opponents of drone use point out several key factors: drones have not inhibited the swift replacement and promotion of new terror leaders after others are killed; when harassed, terror operations simply move to new areas (as in the recent migration of Qaeda leadership from Pakistan to Africa and the Arabian Peninsula); and drone strikes have done little to blunt the marketing outreach by Qaeda and Taliban operatives working on the web. In fact, Qaeda and Taliban operatives routinely post videos of drone attacks and photos of the aftermath of drone havoc to promote the cause of radical jihad to new recruits and potential supporters among the population. These images of drone attacks often generate more sympathy, online activity and recruitment success than those slick training videos with their uplifting martial music and their fighters wearing scarves and bandanas. Critics say that this fact alone demonstrates the failure of drones to significantly reduce potential terror.

From a military and foreign policy standpoint, nothing has been more troubling to American progressives and western liberals—save perhaps Guantanamo—than President Obama’s total embrace of drones. As the high cost and political pain of two wars (now only Afghanistan) wore thin with Americans, Obama and his Pentagon chiefs quickly escalated drone activity from a secondary to a primary tool of war. Critics point out that where Bush used drones only sparingly, and only after careful scrutiny of the high value of the target involved, the Obama administration has lowered the bar substantially, ordering strikes on secondary and tertiary targets, or, for that matter, on any gathering deemed suspect by intelligence analysts. Despite the 2011 claims by John Brennan and others that drones had resulted in few, if any, collateral deaths (a claim which seemed suspicious even to conservatives and neo-cons at the time), video and photographic evidence indicates that civilians have been killed on multiple occasions, whether through technological failures or mishandled intelligence. This feeds the propaganda campaigns of terror networks and enables Taliban and Qaeda recruiters to portray the American embrace of drones as a form of high-tech terror designed only to kill Muslims.

Further, opponents of drone deployment say the claim that drones have had significant success in thwarting potential terrorist activity is spurious at best. Since terror plots are conceived and executed by small, cellular groups using improvised, out-of-the-box tactics and tools (the Boston Marathon bombers used cheap pressure cookers and backpacks; al-Shihri’s operatives used underwear), the notion that intelligence gleaned online or from cell phone activity can lead to targeted drone success somewhere in Afghanistan or Yemen is neither verifiable nor measurable. Drones, these critics argue, simply provide a lazy, detached way to eradicate pockets of “suspicious” activity in some remote location.

Because a significant number of drone strikes now take place in Pakistan, Yemen, Somalia, and along the southern rim of the Arabian Peninsula, liberals contend that U.S. drones act as a proxy weapon in support of odious military regimes. Writing in the most recent edition of Foreign Affairs, Audrey Kurth Cronin says drones have in effect become “remote-controlled repression.”
“With its so-called signature strikes,” Cronin writes, “Washington often goes after people whose identity it does not know, but who appear to be behaving like militants in insurgent-controlled areas. Worse, because the targets of such strikes are so loosely defined, it seems inevitable that they will kill some civilians.”

This conundrum of war remains constant: no matter the precision of the weapon, its lethal effects will surely spill over into civilian areas, with disastrous results. Lyndon Johnson famously agonized over the ethics and the blowback of bombing, though his successor Richard Nixon had little compunction about its extensive use in Southeast Asia. After 9/11, George W. Bush, the neo-con unilateralist, used drones surgically and sparingly; his successor, the internationalist conciliator, has exchanged the toxic baggage of a decade of costly conventional ground war for the low-cost expedient of remotely-controlled operations.

Meanwhile, the U.S. military brass is working overtime to expand the scope and flexibility of drones. Just last week the U.S. Navy conducted more field tests as technicians sought to remote-pilot a drone from take-off to landing on the deck of a moving aircraft carrier. Though landing a returning drone was the most complex problem to solve, the successful July 10 landing of a drone on the deck of the George H.W. Bush demonstrated to the Navy that it was possible (other landing attempts during testing were aborted). The U.S. Navy says it intends to continue work on perfecting drones for fleet applications, with the immediate goal of incorporating drone technology and deployment platforms into its new Ford-class aircraft carriers, named for the lead ship, the Gerald R. Ford, which is scheduled to be christened later this year. Ford-class carriers will all be fitted for extensive drone deployment.

But for U.S. libertarians on both sides of the political aisle, and on both sides of the age-old hawk-versus-dove debate, drones raise issues more troubling than their relative success on the battlefields of Afghanistan or in the rugged and remote regions of the world’s most lawless locales. The frank admission by FBI and NSA officials that drones have already been deployed by intelligence agencies and law enforcement over American soil, and the soil of some of our allies, makes the ethical and constitutional questions of remote-controlled surveillance and interdiction urgent. Local and state police have recently deployed drones for tasks ranging from traffic enforcement to drug interdiction, and from vehicle identification to the tracking of individuals, in some cases coupling data harvested from new squad car tag-readers with computer systems which can quickly direct the drones to their intended target or location. When residents and local officials in one Colorado town approved an ordinance allowing citizens to use their personal guns to shoot down snooping drones, the FAA and the Justice Department weighed in with an immediate warning that downing a drone could be a federal offense.

Many conservatives and liberals were shocked in recent months by the casual, even arrogant responses of public officials from the Justice Department, NSA and IRS when—in their appearances before Congress—they told of their agency or department’s rather elastic and unrestrained interpretations of the U.S. Constitution. The NSA’s domestic spying program—harvesting massive troves of information from the cell phones, laptops, emails and downloads of average Americans—seemed the most troubling permutation of the problem, and a clear indication that drones, like many other technologies, might all too easily be adapted for homeland surveillance and even social and behavioral restrictions.

Questions about the use of remotely-controlled flying machines are not easy to answer, and they inevitably lead to the age-old balance between enhanced security and eroded liberty, lives saved versus lives lost, and the intractable problem of preventing civilian deaths in a world in which terrorism erases the traditional boundaries of battlefields. If used domestically, drones are surely a genie we will find impossible to put back in the bottle—capable of remarkable feats in the blink of an eye, but also, perhaps, an expedient agent for the erasure of privacy and individuality.

See more at: http://www.thursdayreview.com/Drones.html