Archive for July, 2011

Do something

July 25, 2011

In 1992 Somalia provided the world’s imperial powers with a testing ground, and an opportunity — with the strategic field recently swept clear of Soviet counterforce — to establish a useful precedent.

The UN World Food Program, and private NGOs and international aid agencies including CARE, Oxfam and Human Rights Watch, publicly and successfully advocated military intervention (‘peacekeeping operations’) with the declared purpose of providing humanitarian assistance and disaster relief to famine victims.

Over the subsequent two decades, blue helmets and US Marines have been deployed here and there throughout the world’s ‘crisis zones’ (Haiti, Bosnia, Timor-Leste, etc.).

Their personnel have been granted immunity from domestic law of the host state via Status of Forces Agreements and freedom from the Fourth Geneva Convention, because they aren’t quite occupying forces, exactly.

Following the Biafra secession conflict, an ideological quest had begun to legitimize, in North American and European public opinion, extraterritorial military intrusions into formally sovereign jurisdictions.

By the early 1990s this project sans frontières  — taken up in the relevant chancelleries, foreign ministries and top military echelons — had marshalled the support of relief workers, civil servants, journalists and activists. Almost the entire left-liberal commentariat had climbed eagerly on board by the time the ‘humanitarian’ ship set sail for Kosovo, East Timor and Libya.

In Somalia, UNOSOM, UNITAF and Operation Restore Hope were mandated to ‘create a secure environment for the delivery of humanitarian relief.’

The basis for these calls during late 1992 was that relief food was being looted on a grand scale — a claim resting on the (exaggerated, as it turned out) say-so of relief-agency heads, and on lurid media photographs of starving children and General Aidid’s gun-toting thugs on the back of pickup trucks.

Together these pricked, as is said, the moral conscience of the ‘international community’.

Meanwhile the absence of a stable local government removed many of the administrative checks on aid-agency field missions. NGOs could consequently bypass the usual host-country bureaucratic encumbrances, dealing directly on each side with donors and the needy, and expanding their usual roles.

The stakes thus upped, NGOs jockeyed for publicity and position as the favoured lead agency for receiving and disbursing donor resources. Hence the ‘disaster pornography’ of helpless victims, evil warlords, etc.

The President of CARE International at this time was former Australian Prime Minister Malcolm Fraser; his daughter, who worked in the marketing and communications department for the organization, and later as its head of fundraising, contributed dramatic pictures and interviews for The Australian newspaper.

This inflamed collective conscience survived the bloody reality of peacekeeping — details of which, in 1993, even pliant foreign correspondents could not keep from readers.

Far from being an impartial and apolitical provider of humanitarian needs — merely giving corn from Kansas and wheat from Saskatchewan — relief agencies were used as policy instruments through which interested external powers could take sides between belligerents fighting over the scraps of a collapsed state located in strategic territory at the mouth of the Red Sea, with a separatist Somali minority nearby to the north in oil-rich Ogaden, and with a vast Indian Ocean littoral.

On the pretext of protecting aid supplies, the troops despatched were providing military advantage to their favoured belligerents.

In June 1993 the Chicago Tribune revealed blatant war crimes:

The scene at Digfer Hospital, Mogadishu’s biggest medical facility, speaks volumes about the dilemma confronting the United Nations as it tests the uncharted waters of its new power to use military might to enforce peace.

There is a gaping hole in the wall of the recovery room, where three anesthetized patients had been lying when a projectile struck. The ground floor main reception area is a jumble of glass shards and twisted shrapnel from another bomb.

Hospital director Dr. Fuji Mohammed says three women were killed when the bomb exploded. He says the hospital received six or seven hits, but he does not know the total number of victims because patients and their relatives fled when the attack started, taking the dead and wounded with them.

The United Nations flexed its newfound muscles in Mogadishu last week with devastating, and potentially disturbing, effect. But the message is clear: If the UN is going to enforce peace, rather than simply monitor it, people will die.

Hospitals will be hit, missiles will go astray and kill children and the UN itself will become party to at least some of the horrors confronting the world.

The UN says Digfer Hospital was attacked only because gunmen loyal to Gen. Mohammed Farah Aidid, the warlord whose power it is seeking to destroy, had holed up there and were attacking the blue-helmeted peacekeepers.

It probably will never be known how many Somalis died in the UN onslaught against Aidid. In Somalia, deaths are not registered and the hospital has ceased to function. Anyway, those killed in the fighting simply would be taken home to be buried by their families.

But the indications of civilian casualties are high, running into the dozens or more. Five peacekeepers – four Moroccans and a Pakistani – also gave their lives in the drive to neutralize the warlord who had plagued the international community’s efforts to protect relief supplies to starving Somalis…

The normal rules of engagement do not apply in this nation, said a senior UN official. Militias fire on peacekeepers from a hospital. Women and children mingle with gunmen on the capital’s densely populated streets, and sometimes help them attack UN soldiers.

“No one has trained our troops to fire on women and children,” the UN official said…

If the UN was to retain any credibility in Somalia, it had to strike back, and strike back forcefully, said Mike McDonagh, head of operations for the Irish agency Concern.

The withdrawal of US soldiers and marines followed the notorious Battle of Mogadishu — portrayed in Ridley Scott’s equally notorious film Black Hawk Down — an experience which contributed to all the subsequent work on Military Operations on Urban Terrain.

This strategic focus in turn led to ‘pre-emptive preparation’. The latter involved equipping ‘potential hotspots around the globe’ with ‘secure, surreptitious’ unattended ground sensors, development of non-lethal (i.e. acoustic and EM) ‘access denial’ weapons and deployment of remote guided munitions that could survive the ‘urban canyons’ and signals disruption of the Pentagon’s declared new ‘battlespace’: teeming slum-filled metropoles like Lagos, Jakarta and Karachi.

This project led ultimately to the likes of Operation Phantom Fury, when a Carthaginian peace was imposed on Fallujah.

In decades past, leadership of the Somali state meant control over almost the entire surplus product: under President Siad Barre, aid revenues largely financed a bloated security apparatus and bestowed favours via a patronage network.

Those close to the government (through clan ties or military service) were granted private title to irrigable land, newly-enclosed property which then could be used as collateral for loans from foreign donors. These borrowed funds financed control over the livestock trade (in the north the best land was used mostly for grazing herds, and alluvial soil in the south used to a lesser extent for cash-crop plantations, rather than agriculture for subsistence or the domestic market).

Land speculation, and the abolition of grain-price ceilings in 1984, saw hyperinflation emerge just as crops failed in the late 1980s.

Smallholders were dispossessed and became rural labourers for the wealthy, or part of the immiserated ‘informal’ sector in the cities.

Food prices spiked, as did the cost of land in Mogadishu as members of international relief and technical-assistance organizations sought housing and other amenities.

Rich merchants, army officers and urban elites thus ‘got rich quick by milking the cows of international refugee relief, foreign development aid, and military assistance’, allowing them to ‘educate their children overseas, build lavish villas for themselves or for rental to foreign expatriates, and purchase fleets of cars with hard currency that was supposedly scarce in the country’ (Lee Cassanelli).

Famine conditions thus arose through activity that was supposed to forestall famine. Funded by a massive monetary inflow (Somalia has been the largest per capita recipient of foreign aid in Africa since the 1980s), a rent-seeking mercantile and political class — swapping USAID dollars for Somali shillings, taking kickbacks from ‘development’ contractors delivering overpriced products, and generally buying low and selling high — enriched itself and took over the country’s productive assets (i.e. fertile land) for external trade and domestic speculation.

This engrossment of people’s livelihoods produced more of the captive needy for relief organizations.

This logic has more-or-less survived down to the present day.

But the collapse of the state in 1991, and the influx of relief organizations and aid, meant direct physical control over ports and airstrips became a new source of wealth and power. Banditry — imposition of roadblocks and blockades — plus capture of humans as booty — in other words, slavery — and other forms of asset stripping and primary accumulation by violence have arisen.

Thanks to the funds channelled inwards by the aid industry (representing claims on wealth impossibly vast compared to the size of the physical economy itself), conquest of Somali state power has presented so rich a prize that no group has, for two decades now, been able to achieve it.

In the north, semi-independent Somaliland and Puntland have revived husbandry and trade in livestock, and Berbera port in the Gulf of Aden has re-opened, restoring links to the Ethiopian market and to wage remittances from southwest Asia and elsewhere.

Dahabshiil, a funds-transfer company headquartered in Dubai, services a worldwide Somali diaspora along with the international organizations based in the Horn of Africa. It recently has diversified into banking and telecommunications.

In the breadbasket Jubba and Shebelle valleys of the south, things are much grimmer.

Today, near the Somali border with Kenya, the US is firing Hellfire missiles at al-Shabaab ‘militants’ from unmanned aircraft. In Somali waters, US naval vessels provide floating offshore facilities for secret detention and prolonged interrogation of alleged ‘enemy combatants’ captured in third countries. The CIA has recently been revealed to operate a similar facility in Mogadishu itself, to which people are rendered from across East Africa.

Somalia is the focus of Washington’s programme of maritime interdiction in the Horn of Africa, ‘piracy’ providing the pretext for a continuous naval presence in the waters just south of Yemen, in the sea route linking China, India and Japan with their main energy providers.

[Image: Stratfor Somalia]

At roughly the same latitude, on the west coast of Africa, the Gulf of Guinea is a new strategic priority, thanks to its energy resources, Chinese inward investment and consequent influence in Gabon, Equatorial Guinea and nearby Angola and Zambia, and the chronic low-intensity violence, oilfield sabotage, gangsterism, money laundering and state instability in Nigeria.

As China’s working-age population peaks over the next decade, Chinese firms will increasingly look to invest abroad. Africa will be one of the few regions offering competitive rates of return, and capital inflow will likely allow some new industrialization.

But it is unlikely to bring ‘development’ of a helpful sort for the agrarian smallholders and pastoralists of the semiarid Horn of Africa and the savannahs of the Sahel. The quantity of fixed capital required to develop infrastructure from its current state, and the labour reserves available, would make anything besides agribusiness an unprofitable and uninviting prospect.

Thus the most likely outcome is development of irrigation and other hydraulic-management infrastructure (barrages, canals, wells, pipes, pumps, etc.) in the alluvial plains of Somalia’s densely-populated riverine south to a level sufficient to allow banana and sugar plantations to produce for export markets.

Large industrial farms (aziende) were run in the past by Italian colonists and, more recently, firms such as De Nadai (through its subsidiary Somalfruit) and Dole Food Company (Sombana), the world’s largest producer of fresh fruit and vegetables. These survived during the 1990s, when the companies contracted local militias to capture port facilities and oversee forced labour.

But since state breakdown the stock of equipment (e.g. tractors, diesel fuel) and infrastructure has run down due to looting and lack of investment. The agricultural surplus has been extracted as protection payments and ‘tax’ at the point of a gun, and has been squandered by militias on weaponry and other forms of unproductive consumption. This has left irrigation channels vulnerable to violent ENSO and Indian Ocean Dipole events (and perhaps human-induced climate change), alternately bringing floods and disruption of the monsoon.

For decades now, technical-assistance manuals put out by development agencies (UN, EU, etc.) have stressed that the cash-crop sector can only be revived by ensuring secure property rights, leading to renewal of the water-delivery infrastructure, mechanized production and scale economies.

Should this vision be realized, it will mean displacement for the small-scale commercial farmers of maize, fruit trees, tomatoes, rice and groundnuts, and for the subsistence pastoralists raising livestock for milk, etc. Large-scale capital-intensive production using wage labour will disrupt existing rural society, eliminating whole social layers, just as Italian colonialism broke up the previous pattern of communal, open-access landholding.

The smallholding peasant class will be propelled into urban centres such as Mogadishu, Baidoa and Kismayo, where its members will face an uncertain fate, lacking jobs, other means of subsistence, sanitary waste disposal and other basic utilities.

But even this scenario, given the current state of things, supposes an inflow of development grants and long-term loans.

How likely is it that a capital-exporting country would be allowed to build up a stock of assets in a country surrounded on all sides by close allies of Washington, and subject to recurrent military raids by one of them (Ethiopia) and by the US itself?

The US has shown itself concerned enough about Chinese influence in Sudan to allow partition of that country, leading to probable revision by South Sudan’s energy ministry of existing contracts with CNPC, Petronas and ONGC, and possible diversion of downstream activity away from the existing refinery in Khartoum and export terminal at Port Sudan to a pipeline through Kenya.

Chinese construction services, infrastructure investment and influence-peddling in Somalia are surely unwelcome; Washington would prefer the managed disorder of ongoing civil war, periodic famine and Bono’s appeals for emergency relief.


Our favourite peons

July 12, 2011

From Ross Perlin’s new book on unpaid internships, ‘the principal point of entry for young people into the white-collar world’, ‘a form of mass exploitation hidden in plain sight’:

In much of the developed world, the subtle, relentless pressure to do an internship is now a crucial part of being young…

Internships may be everywhere today, but they remain such a recent, chaotic phenomenon that there are seldom any rules of the road, any standards or codes of conduct that are honoured — only vague expectations, for which no one is held accountable.

Even the word intern is a kind of smokescreen, more brand than job description, lumping together an explosion of intermittent and precarious roles we might otherwise call volunteer, temp, summer job, and so on. Until just a few decades ago, the word referred almost exclusively to a particular brand of hands-on apprenticeship in the medical profession.

The internship has become a new and distinct form, located at the nexus of transformations in higher education and the workplace…

Today interns famously shuttle coffee in a thousand newsrooms, Congressional offices and Hollywood studios, but they also deliver aid in Afghanistan, write newsletters for churches, build the human genome, sell lipstick, deliver the weather report on TV, and pick up trash.

They are college students working part-time, recent graduates barely scraping by, thirty-somethings changing careers, and — increasingly — just about any white-collar hopeful who can be hired on a temporary basis, for cheap or for free. They’re our favourite peons, loaded with little indignities and pointless errands…

Even law firms specializing in employment issues have the gall to flout labour law by taking on unpaid interns and providing scant training.

In the last few decades, internships have spread to virtually every industry and almost every country, while internship-related businesses and campus career offices also proliferate (hawking internships, organizing internship fairs, declaring an “internship week” on campus, and so on)…

A half-century boom in nonprofit work and the much-touted blossoming of “civil society” have been powered unmistakeably by the internship explosion.

Indeed, if you go to the website of the Victorian Young Unionists Network, a creature of the Victorian Trades Hall Council, you will find internship positions advertised:

The Young Unionists Network was established five years ago because unions aren’t just for tuff old blokes. In fact, young people are most commonly the target of exploitation. Shift work, long hours without overtime, unpaid (illegal) “trial periods”, even dangerous working conditions: these are some of the issues young workers face and all too often accept.

That’s why we invited people to apply for our Union Summer three week internship programme — so you can learn how to fight back! Union Summer gives people committed to social justice and workers rights the opportunity to get involved in an educational internship working in trade unions with organizers and union activists.

We aim to bring unions together with young workers, students and activists and give them the opportunity to get active in organizing campaigns that address these issues — through an internship.

As Perlin mentions, the rise of unpaid internships and other low-paid or precarious work in the ‘career path’ of many young people in advanced economies is bound up with changes to higher education.

In recent decades, due to rising tuition fees and an inflated housing market, many young adults have been forced to finance current consumption by borrowing.

But, because they own fewer assets than later-middle-aged people, this is not collateralized borrowing. And debt is serviced not out of realized or prospective capital gains (i.e. sale of equities or appreciated real estate) but out of low and unreliable income streams.

The amount of prudent borrowing that can occur is thus limited. Liquidity constraints — inability to finance spending much beyond their immediate cash inflows — condemn many young adults to a fragile existence and low material standard of living.
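The arithmetic behind this constraint can be sketched. Below is a minimal illustration with invented figures (the income, reliability, repayment-share and interest-rate values are all hypothetical) of how capping repayments at a slice of an unreliable income stream yields a far smaller borrowing limit than lending against collateral:

```python
# A minimal sketch (hypothetical figures) of why uncollateralized borrowing
# against a low, unreliable income stream is tightly capped. The lender caps
# repayments at a fraction of expected income, discounted for insecurity,
# rather than lending against an asset's resale value.

def prudent_debt_ceiling(annual_income, income_reliability, service_share, years, rate):
    """Present value of the maximum affordable repayment stream."""
    payment = annual_income * income_reliability * service_share
    # Present value of an annuity of `payment` for `years` at interest `rate`:
    return payment * (1 - (1 + rate) ** -years) / rate

# A casual worker: $30,000 a year, assumed only 70% reliable, with at most
# 20% of it available for debt service, over ten years at 8%:
young = prudent_debt_ceiling(30_000, 0.7, 0.2, years=10, rate=0.08)

# A homeowner with the same income can instead borrow against collateral,
# say 80% of a $400,000 house:
collateralized = 0.8 * 400_000

print(round(young), round(collateralized))
```

On these assumptions the uncollateralized ceiling comes out near $28,000, against $320,000 for the collateralized borrower: an order-of-magnitude gap of the kind that leaves the asset-poor liquidity-constrained.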

[Image: Lenddo Friend-o-meter]

Meanwhile, as the demographic structure of the advanced economies has changed, the number of working years during which employees can save for life-cycle reasons has become fewer. Net saving has been restricted to, at best, a few middle decades of an individual’s working life (despite mandatory contributions to private superannuation funds).

In Australia, which has one of the highest rates of casual labour as a proportion of the total workforce in the OECD, people aged 15-24 make up over 40% of casual workers. Across the advanced economies, the labour-market prospects for young people are atrocious.

The chart below shows the unemployment trend in Australia over the past thirty-five years.

Observe, in the chart below, the abrupt step-change in the labour-force status of Australian men aged 25-34 that occurred following the early 1990s recession, via the Keating government’s Working Nation policy response.

Working Nation was a ‘reciprocal obligation’ package of compulsory vocational training, activity tests, case workers (‘minders’ who offered ‘encouragement and moral suasion’, and whose work was ‘opened up to competition’ from privately-owned non-governmental contractors for the first time), along with Jobstart.

Under Jobstart, the federal government subsidized the wage bill of private firms who took on long-term unemployed young workers. Thus youth were channelled towards economically marginal enterprises that offered them sporadic, insecure and poorly paid entry-level jobs. (Wage subsidies, which ranged from $100 to $230 per week, expired after nine months, after which few employees were retained.)

Citing ‘strong community concern that some unemployed people are making insufficient effort to find employment’, harsher penalties were imposed on those who did not ‘accept any reasonable job offer.’ (Others were obliged to perform voluntary work to retain eligibility.)

A chief goal, averred the prime minister, was to ‘to encourage more substantial part-time and casual work.’

Working Nation’s explicit objective, amid mass unemployment on a scale unprecedented since the Great Depression, was ‘not to directly increase the total stock of jobs… Rather, the prime object is to improve the efficiency of the labour market/s by contouring the labour supply to match demand.’

The burden of this whipping into shape fell disproportionately on the young.

[Image: Keating - Working Nation]

This has been a common feature shared across the advanced economies since the 1970s even as successive generational cohorts grow smaller due to falling birthrates. The long-term trajectory has been buttressed by cyclical factors: young people have fared the worst during the global slowdown since 2007.

In the United States, the unemployment rate for people aged 16-24 is now 19.7%, well above the all-ages aggregate figure, and the highest since 1948, when collection began. The U-6 unemployment rate (the sum of unemployment and underemployment, including unwilling part-timers, people discouraged for ‘job market-related reasons’ and not bothering to seek work, and other ‘marginally attached’ people, which in Australia is called the ‘labour underutilization rate’) for people in this age group is around 30%, compared to just over 16% for the population as a whole.
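For readers unfamiliar with the measure, the U-6 construction described above can be written out directly. A minimal sketch, using invented round numbers (not actual BLS data) chosen to land near the roughly 30% figure quoted:

```python
# Illustrative U-6-style underutilization calculation (hypothetical numbers,
# not actual BLS data). U-6 counts the unemployed, plus all marginally
# attached workers, plus those employed part-time for economic reasons,
# over the labour force augmented by the marginally attached.

def u6_rate(unemployed, marginally_attached, part_time_economic, labour_force):
    """Return the U-6 underutilization rate as a percentage."""
    numerator = unemployed + marginally_attached + part_time_economic
    denominator = labour_force + marginally_attached
    return 100 * numerator / denominator

# Hypothetical youth cohort (thousands of people):
print(round(u6_rate(unemployed=3_000, marginally_attached=800,
                    part_time_economic=2_200, labour_force=20_000), 1))
```

Note how the marginally attached appear in both numerator and denominator: they are counted as underutilized, but must also be added back into the labour force, since the headline measure excludes them entirely.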

For people aged 16-19, the US unemployment rate increased by a third between June 2008 and June 2011; the proportion of the teenage US population employed has fallen from 35% to 25% over that period. The unemployment rate for African-Americans aged 16-19 is 40%, a figure which, of course, does not include those incarcerated.

In Canada, youth unemployment is also above 19%, in France it’s above 22%, in Italy, Belgium and Sweden one quarter of people aged 16-24 are unemployed and, in Spain, the rate has been above 40% for several years (compared to a current 21% for the workforce as a whole). All these figures are dramatically worse than in previous recessions (e.g. 1982 and 1991).

The situation is thus ripe for ruling social layers and their conduits in the media to adopt a divide-and-rule strategy, fomenting intergenerational discord.

OECD reports, taking advantage of today’s budget deficits to push for global privatization of public pensions, warn darkly of ‘intergenerational conflict over public resources’ if their message is unheeded.

Government pensions form a ‘stress point for relations between generations’, threatening ‘a breakdown… in the informal nexus of support within and between families which is so vital… in providing the essential glue which holds society together.’

The Cato Institute, more vividly, speaks of ‘war between the generations’ with spending on the elderly ‘set to explode’: ‘taxes to support growth in Medicare and Social Security will severely eat away at young people’s income in coming years unless those programs undergo fundamental reforms.’

‘Is war between generations inevitable?’ ask two economists from another US think-tank. Sadly, they conclude we are facing the ‘beginning of an enormous conflict over resources. Indeed, it is probably no exaggeration to say that we are approaching generational warfare.’

In the mainstream press, meanwhile, one opinion columnist will piously advise baby boomers to cede their ‘privileges’; another commentator, playing to a different crowd, will decry youthful ‘job snobs’.

The interests of youth, it will be explained, are opposed to those of their elders: each can only prosper at the expense of the other.

For are not houses inherently scarce, and aren’t there only so many good jobs to go around? Some order of precedence must be settled upon for allocating these scarce and valuable goods; and, well, the natural succession of human generations must come into it some way or another.

Given ready access to the media, such are the lineaments, fleshed out publicly in endless hearings, of a purported conflict of interest between generational cohorts: indebted students and marginally employed young people, on the one hand, assailing the complacent fortress of asset-rich retirees and middle-aged employees on the other.

What is, in reality, an inter-generational game of common interest (in which the vast majority of the population, sharing a life-cycle and few savings, would mutually benefit from an arrangement under which secure work and shelter were readily available to all at any age) is thereby transformed into a pure-conflict game, in which a finite pie must be divided, with gains for one party implying losses for another.

[Image: Respective payoffs in pure-conflict and common-interest games - Samuel Bowles]
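The distinction between the two game types can be made concrete with toy payoff matrices (the payoffs below are invented for illustration, not taken from Bowles):

```python
# Illustrative payoff matrices (hypothetical numbers) contrasting a
# pure-conflict (zero-sum) game with a common-interest game. Rows are
# "young", columns are "old"; each cell is (young's payoff, old's payoff).

pure_conflict = {            # a fixed pie: one side's gain is the other's loss
    ("grab", "grab"):   (1, -1),
    ("grab", "share"):  (2, -2),
    ("share", "grab"):  (-2, 2),
    ("share", "share"): (-1, 1),
}

common_interest = {          # both sides do best coordinating on "share"
    ("grab", "grab"):   (0, 0),
    ("grab", "share"):  (1, 1),
    ("share", "grab"):  (1, 1),
    ("share", "share"): (3, 3),
}

# In the zero-sum game the payoffs always cancel...
assert all(a + b == 0 for a, b in pure_conflict.values())

# ...while the common-interest game has one outcome that is best for both:
best_for_young = max(p[0] for p in common_interest.values())
best_for_old = max(p[1] for p in common_interest.values())
assert common_interest[("share", "share")] == (best_for_young, best_for_old)
```

In the first matrix every payoff pair sums to zero, so any gain for the young is exactly an elder’s loss; in the second, the (‘share’, ‘share’) cell is best for both parties, which is the structure of a common-interest game.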

For such a barren vision to kindle any kind of popular response, ‘generational’ categories must already have acquired salience and become the stuff of affiliation. As luck would have it, this requires little dedicated effort or on-the-spot improvisation: the conditions already exist.

In late capitalism, official civic pluralism (ideological celebration of postmodern ‘diversity’ and heterogeneity) is supplemented, in the commercial sphere, by targeted or ‘niche marketing’, which encourages fine-grained horizontal segmentation of consumers according to ascriptive traits (age, sex, ethnicity) or ‘identity’.

Tailored communication (including discriminatory pricing and product offers or ‘versioning’) proceeds along distinct, non-overlapping channels.

Product differentiation (which becomes ever more important as price competition wanes) encourages oligopolistic firms to ‘pack the product space’, introducing new products and brands to occupy each potential market niche and deter competitors. (On my latest visit to the supermarket, I counted 23 varieties of Colgate toothpaste.)

Thus we see the proliferation of spurious brand variety with meaningless attributes, claiming to distinguish more-or-less identical products via group-specific ‘psychographics’.

[Image: Ross Simmonds - Gen Y psychographics]

Finally, firms pursuing network externalities (in which there are increasing returns with the number of buyers of a product) seek to assemble consumer in-groups, knitted together through friendship, shared demographic characteristics or similar cultural outlook.

Group identity, naturally, depends on erecting barriers to entry that exclude outsiders (who do not possess the requisite quality or trait) from admission.

[Image: no old people]

Such segmentation according to demographic category may overlap with labour-market segmentation. (Besides the young, the most obvious example is women, who are disproportionately found in the least-skilled and worst-paid sections of the workforce, performing low-wage, high-turnover, messy and unsafe jobs. If one sorts Australian full-time non-managerial employees by mean weekly earnings, the lowest-paid occupations are textile, clothing and footwear trades, hairdressers, childcare workers, checkout operators, cleaners and laundry workers, receptionists, food-preparation and hospitality workers. Employees in these industries are typically women, while marketing for their products is also generally addressed to female consumers.)

As Adam Smith noted in his example of the philosopher and the street porter, the mere existence of a social division of labour imposes distinctions of ‘habit, custom and education’ between ‘men of different professions’, until they can acknowledge ‘scarce any resemblance’ between each other.

But Smith concluded such differences were ‘much less than we are aware of’: ‘By nature a philosopher is not in genius and disposition half so different from a street porter, as a mastiff is from a greyhound, or a greyhound from a spaniel, or this last from a shepherd’s dog.’

Yet today, in a vastly more advanced stage of what Smith called commercial society, the media and advertising industries accustom people to being addressed as part of a narrow, precisely specified demographic pool — distinguishing owners of mastiffs from those of greyhounds and spaniels — rather than as a common (class-divided) social aggregate with shared interests.

When opportunity arises, this pre-existing infrastructure of social distinctions can provide the basis for demagogic mobilization by political entrepreneurs (electorally or otherwise).

[Images: Obama 2008 vote by age; Obama 2008 vote by age and ethnicity]



Age, of course, is not a useful basis for in-group attachment and out-group mistrust, as one’s classification as young or old does not persist over time as one’s racial, religious or gender status mostly does. So ‘generational’ categories are used instead: ‘baby boomer’, ‘generation Y’, etc.

As in other cases, this involves partitioning people according to some characteristic, encouraging affinity within groups and dislike between them, vilifying the subordinate group — ‘gen Y’ is ‘lazy’, etc. — and calling forth a group of ‘leaders’ who can represent that group’s interests and be the ‘voice of youth’.

The group will be said to constitute a special interest that must appoint its own lobbyists to secure its share of the spoils.

Meanwhile the accrual of professional status and social privileges by this youthful aristoi or ‘talented tenth’ will, it’s claimed, benefit all or most members of the group. Thus, in Australia, Natasha Stott Despoja, Sarah Hanson-Young, Kate Ellis and Tanya Plibersek plead that only by advancing the career prospects of advocates like themselves — electing them to parliamentary assemblies, granting them scholarships or seating them on corporate boards of directors, including the youth minister in cabinet — can young adults as a group improve their fortunes.

In reality, neither the enrichment of a few young adults, nor the proximity of ‘youth representatives’ to the levers of power and channels of influence, nor the increased ‘diversity’ (age-wise) of the state elite and propertied classes will alter the increasingly tenuous and put-upon status of young people in education and the labour market.

‘Youth representatives’ may, to a limited extent (and supposing their generational solidarity is sincere and not cynical careerism), pressure the state to spend in certain areas, pass favourable legislation, and bestow patronage upon their allies.

But by necessity, and to a far greater degree, their range of possible actions will be constrained by, and not allowed to conflict with, the fundamental needs of an economy that has over recent decades decreed that young adults are to be indebted, savings out of income made impossible, their jobs insecure and labour performed for little or no compensation, necessities (health, education, shelter) placed out of or barely within reach, and their living standards lowered.

Smoke and mirrors

July 7, 2011

A just-released PNAS paper by Robert Kaufmann and others claims to account for observed global surface temperatures since 1998 by including in its model the ‘global dimming’ effect of sulfate aerosols emitted with the combustion of coal, oil and natural gas, and the smelting of copper, zinc, lead and nickel.

It has attracted some attention from the mainstream news media (which welcomed it as a chance either to tut about Chinese pollution or to spout hogwash); so too have recent warnings of drought and near-famine conditions in the Horn of Africa.

This brings to mind something that has received very little attention outside the world of environmental science, a 2002 paper co-written by Leon Rotstayn from CSIRO Marine and Atmospheric Research. ‘Tropical Rainfall Trends and the Indirect Aerosol Effect’ modelled the effects on rainfall of sulfate emissions and found that they helped account for decades of strikingly decreased rainfall observed in northeast Africa from the 1960s onwards, which led ultimately to the calamitous Sahel famine of the 1980s. Simulations revealed that precipitation in the low-latitude band between Senegal and Eritrea is very sensitive to changes to oceanic temperature gradients in the nearby Atlantic and Indian Ocean basins. Such marine temperature changes, and the human misery that resulted, may have been induced by the activity of coal-fired power stations, diesel vehicles and oil refineries in the northern hemisphere, which caused that hemisphere to cool relative to the southern. For now that is an open question.

Much interesting work has been done on the climate and water-cycle effects of atmospheric brown clouds over the Indian Ocean and East Asia formed by black-carbon and sulfate aerosols resulting from nearby and upwind industrial activity and household burning. These may include the recently observed moderate cooling in China and India, disruption of the summer monsoon, floods in southern China and drought in the north. Uncertainties, apparently, abound when estimating the precise effect on cloud formation, precipitation and radiative forcing of atmospheric aerosols. But, just as in Ethiopia and Sudan during the 1980s, what was previously attributed to overfarming and desertification can be simulated by other means, and may thus have another explanation.

The Ethiopian-Sudanese famine of the 1980s was one of the great televised and photographed horrors of the late 20th century. The reality of these events and their use as a business opportunity has remained obscure, as Alex de Waal, formerly of Human Rights Watch, argued in his book on the ‘disaster relief industry’, Famine Crimes. De Waal’s detailed retelling of the famine, focused on its social and political underpinnings, ought to be supplemented with knowledge of research into its possible physical basis.

The alternative is the grotesque line of BBC foreign correspondents about ‘biblical famine’, the video below having become the template for all subsequent treatments of the subject.

An oldie but a goodie

July 7, 2011

Here – believe it or not – is a quite useful piece of work by Eric Posner and Adrian Vermeule. ‘Divide and Conquer’ does not contribute much new analytically. What the paper does provide is a lengthy discussion and taxonomy (from the authors’ impeccably Establishment perspective) of the ubiquity and variety of top-down divide-and-rule strategies adopted in various settings involving bargaining or strategic interaction (labour contracts, constitutional design, imperial rule and counterinsurgency, Great Power rivalry, oligopoly industries, etc.).

This observation, too, has been made many times (Kant listed divide et impera as one of his three political maxims of statecraft; Machiavelli described it in his Art of War). But for that reason, it seems, it is commonly regarded as being old hat, divide and rule having been superseded as a strategy by more advanced and subtle techniques. The leaders of ancient Rome and British India, it’s said, may consciously have plotted to pit the governed against each other; but in the present day things are not so crude. Or, if they are, we’ve known about it anyway for centuries so it’s not worth talking about anymore.

Not so, Posner and Vermeule show. Today divide and rule is a strategy applied in many different domains. It is not just part of the strategic toolkit. It is perhaps the key instrument, for which the wary must keep lookout in all its multiform Swiss-army-knife guises: nationalism, battle of the sexes, generational baiting, etc.

Since the 1970s many people – not just political scientists – have grown accustomed to discussing political matters using game-theoretic terms and formalism. Among these may be numbered the description of climate-change negotiations as a public-goods game; welfare-spending or fiscal-appropriation decisions as a bargaining game; and the forming of legislative coalitions as a Downsian contest for the median voter. Most of these, however, take the form of n-player coordination games in which the problem is how to induce cooperation between agents. But Posner and Vermeule show that often for rulers the goal instead is to foment fractious discord among the governed or subordinate.

Of course, many theoretical discussions, by people like John Roemer and Joshua Cohen, have acknowledged the contemporary role of divide-and-rule strategies. Orthodox economists working on principal-agent problems in labour contracts (i.e. how managers can best induce work effort through payment and reward) have agreed that payments which induce competition between workers are optimal. Conservative think tanks have forthrightly advocated use of divide-and-rule strategies as part of the military occupations of Iraq and Afghanistan.

But Posner and Vermeule provide a domain-general and simple discussion of the strategy and its use in various institutional circumstances. The following conditions, they explain, ‘are essential to any divide and conquer mechanism: (1) A unitary actor bargains with or competes against a set of multiple actors. (2) The unitary actor follows an intentional strategy of exploiting problems of coordination or collective action among the multiple actors.’ Their model is a Stag Hunt game in which each agent has two possible strategies (cooperate or defect) and which has two pure-strategy equilibria: (cooperate, cooperate) and (defect, defect). The attention focuses on ‘the role of third parties who are not themselves players of these games but who will be harmed if the players cooperate’. The third party, whose payoff depends on the result of the ‘nested’ game, tries to prevent the other two players from reaching the cooperative equilibrium. Note that what here are called ‘unitary actors’ may include organised collectives like armies, corporations and political parties, each subject to a unified command-and-control structure. (The term itself comes from the ‘realist’ school of international-relations theory.) Whole societies, made up of millions of component individuals with different objectives, cannot be understood as such. There is no such thing as a general will. The interaction of these multiple agents may be understood as the nested game.

As Posner and Vermeule describe it, the ‘unitary actor is, in essence, a first mover in the larger strategic environment. If cooperation appears likely, the unitary actor will attempt to create and exploit divisions between the game’s players’. This may take various forms: pitting one agent against the other through penalties, bribes or other incentives, sowing mistrust or degrading communication channels. In general, the ruler, who will be adversely affected if the subordinates cooperate with each other, aims at ‘splitting similar groups through dissimilar treatment’. In practical real-world situations, this first-mover advantage generally takes the form of control over rationed scarce resources (jobs, wages, health and education services, political influence), and their allocation to different groups.
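The mechanism can be sketched numerically. In the following minimal Python sketch, the payoff matrix, the size of the side payment `b`, and the helper `pure_nash` are all my own illustrative choices; Posner and Vermeule argue verbally, not with numbers:

```python
C, D = "cooperate", "defect"

def pure_nash(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
    """
    equilibria = []
    for r, c in payoffs:
        r_pay, c_pay = payoffs[(r, c)]
        # An equilibrium: no profitable unilateral deviation for either player.
        row_ok = all(payoffs[(r2, c)][0] <= r_pay for r2 in (C, D))
        col_ok = all(payoffs[(r, c2)][1] <= c_pay for c2 in (C, D))
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# A standard Stag Hunt: mutual cooperation pays best, but defection is safe.
stag_hunt = {
    (C, C): (4, 4),
    (C, D): (0, 3),
    (D, C): (3, 0),
    (D, D): (3, 3),
}
print(pure_nash(stag_hunt))  # both (C, C) and (D, D) are equilibria

# The third party (the 'unitary actor'), as first mover, subsidises
# defection: a side payment b goes to any player who defects.
b = 2
divided = {
    (r, c): (rp + b * (r == D), cp + b * (c == D))
    for (r, c), (rp, cp) in stag_hunt.items()
}
print(pure_nash(divided))  # only (D, D) survives: cooperation is undone
```

With these numbers, mutual cooperation and mutual defection are both equilibria of the unmodified game; once the first mover subsidises defection, only mutual defection survives. The side payment stands in for the rationed scarce resources (jobs, wages, political influence) whose allocation gives the ruler its first-mover advantage.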

A divide-and-rule strategy underlay James Madison’s argument, in Federalist No.10, for a large republic of gentlemen senators as the best constitutional form through which to avoid ‘faction’ (tyrannical rule by the majority interest) and thus preserve the rights of propertyholders:

[The] most common and durable source of factions has been the various and unequal distribution of property. Those who hold and those who are without property have ever formed distinct interests in society. Those who are creditors, and those who are debtors, fall under a like discrimination… The smaller the society, the fewer probably will be the distinct parties and interests composing it; the fewer the distinct parties and interests, the more frequently will a majority be found of the same party; and the smaller the number of individuals composing a majority, and the smaller the compass within which they are placed, the more easily will they concert and execute their plans of oppression. Extend the sphere, and you take in a greater variety of parties and interests; you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens; or if such a common motive exists, it will be more difficult for all who feel it to discover their own strength, and to act in unison with each other.

And in a contemporary letter to Thomas Jefferson:

If then there must be different interests and parties in Society; and a majority when united by a common interest or passion can not be restrained from oppressing the minority, what remedy can be found in a republican Government, where the majority must ultimately decide, but that of giving such an extent to its sphere, that no common interest or passion will be likely to unite a majority of the whole number in an unjust pursuit. In a large Society, the people are broken into so many interests and parties, that a common sentiment is less likely to be felt, and the requisite concert less likely to be formed, by a majority of the whole. The same security seems requisite for the civil as for the religious rights of individuals. If the same sect form a majority and have the power, other sects will be sure to be depressed. Divide et impera, the reprobated axiom of tyranny, is under certain qualifications, the only policy, by which a republic can be administered on just principles.

More recently, witness the dismemberment of supra-national political entities like Yugoslavia and the Soviet Union, and their division along ethnic lines, with various parts then brought under the NATO umbrella. Divide-and-rule strategies commonly work by narrowing the focus of loyalty and solidarity, nurturing or encouraging attachments to nation, race or gender. In situations where a population shares language, behaviour, habits and traditions, it may successfully be divided by making most salient whatever single trait its members do not hold in common: religion, skin colour or some other phenotypic characteristic.

This is also how to understand the emergence during recent decades of particularism and ‘identity’ projects as political movements in the economically developed countries. Each of these functions to partition people and weaken their political unification by upholding the political exclusivity of a group based on some characteristic, often enough inherited at birth, which reproduces a division. Of course, attention to such matters (e.g. racial prejudice or oppression) may have progressive potential to the extent that it overcomes segregation and leads to political unification on a wider basis than before. But that potential can be depleted, and continued focus can become regressive.

As the historian Eric Hobsbawm has pointed out, a universalist movement aims to abolish the category that brings injustice and inequality (e.g. the division of people into classes via differential private ownership of scarce productive assets, in the case of socialism). No nationalism or identity project, on the other hand, aims to abolish nationhood or whatever is the relevant vehicle of identity. Particularism is thus a form of interest-group rent seeking that seeks to gain privileges for its constituent members: access to prestigious law schools, quotas for seats in parliament, favourable welfare payments, etc. As such it is acutely vulnerable to manipulation by ruling groups – which dangle inducements to ‘defect’ rather than ‘cooperate’ – of the sort described by Posner and Vermeule. It is known, for example, that elements of the US and Australian states sought deliberately to deal with 1960s radicalism by diverting it into more amenable nationalist channels. Pliable figures were cultivated and funded, feuds nurtured and groups played off against each other – it being important, J. Edgar Hoover remarked, ‘that Black extremist groups be kept divided so that their strength is not increased through united action.’ External powers, meanwhile, have long promoted secession or separatist movements in resource-rich and strategic regions, from the Kaiser’s posture as ‘protector of Muslims’ during the late Ottoman Empire, through the Biafran conflict, down to the Ogaden and Nubian peoples today. (Which is not to say that such groups aren’t sometimes or usually genuine victims of misfortune and repression). In this they have been assisted by the ‘progressive’ gloss applied to the principle of ‘self-determination’ through its wielding by Austro-Marxists and Stalinists along with Wilsonian internationalists and their latter-day epigones among the NGO and activist set.

These tactics are the bread and butter of security and intelligence organisations, diplomats and politicians. The propertied classes, meanwhile, may benefit from segmented labour markets (where due to scarcity or costly training for some jobs there is a dispersal of wage rates and other conditions of employment, with horizontal mobility of workers limited) and anti-immigrant xenophobia.

There are other groups, foremost among them the media, that assist divide-and-rule strategies without themselves sharing the incentives and immediate objectives of ruling groups. Both the ‘yellow’ and ‘quality’ press, for sound business reasons, revel in promoting, egging on and inventing lurid tales of social conflict, pitting one group – race, gender, generation – against another.

Strange days

July 2, 2011

(Just to forestall boredom, frustration or angry clicks of the ‘back’ button: Much of what follows will be familiar to regular readers of this blog. I’ve gone over similar ground once more only to set up the next post, which will look at a recent and far more interesting paper by Posner and Vermeule.)

In 2003, Eric Posner (University of Chicago) and Adrian Vermeule (now at Harvard) published a paper in the Stanford Law Review called ‘Accommodating Emergencies’.

It examined what the authors described as the proper ‘degree of deference’ payable to the executive branch during unusual circumstances, such as civil war or severe economic downturn.

As the title suggested, Posner and Vermeule cast an affirmative eye on relaxation or suspension of the Constitution during an emergency, or arrogation of executive power such as had recently occurred on the pretext of fighting global terrorism.

They placed themselves in a tradition they identified with Alan Dershowitz, Richard Posner and William Rehnquist, for whom extending the ‘boundaries of the politically possible’ was sometimes to be welcomed:

During an emergency, it is important that power be concentrated. Power should move up from the states to the federal government, and, within the federal government, from the legislature and the judiciary to the executive.

Constitutional rights should be relaxed, so the executive can move forcefully against the threat.

If dissent weakens resolve, then dissent should be curtailed. If domestic security is at risk, then intrusive searches should be tolerated.

There is no reason to think that the constitutional rights and powers appropriate for an emergency are the same as those that prevail during times of normalcy.

They went on to reject the claim, being advanced by some at the time, that ‘emergencies work like a ratchet’, as follows:

With every emergency, constitutional protections are reduced, and after the emergency is over, enhancement of constitutional powers is either maintained or not fully eliminated, so that the executive ends up with more power after the emergency than it had before the emergency. With each successive emergency, the executive’s power is ratcheted up.

Against this notion, Posner and Vermeule argued that there was no secular trend towards entrenched executive power; rather there was a cyclical movement of privileges gained and reversed. Or (arguing in the alternative), if there were such an accretion, it could be explained by ‘long-term technological and demographic changes, not recurrent emergencies’.

In a subsequent working paper called ‘Tyrannophobia’, the same two authors upbraided the US public’s ‘excessive fear of tyranny’. This ‘irrational… unnecessary and costly’ trait could best be explained by ‘cognitive biases and other psychological phenomena’: in short, ‘the broader paranoid style’.

Eight years on, Posner and Vermeule’s affirmative stance remains. They have just published a book welcoming the ‘effective end of the Madisonian republic of separated powers’ in favour of a ‘presidential democracy… in which Congress and the courts have been reduced to marginal actors, who carp from the sidelines but for the most part end up deferring to executive power.’

Judged solely on their descriptive accuracy, the authors can’t be faulted.

As they recognize, ‘few liberal commentators argue anymore that the President is abusing executive power.’ Indeed, today it is to unlikely sources like John Yoo that one must turn for criticism of Obama’s brazen disregard for legal constraints in Libya (which the President dismissed on Wednesday as ‘all kinds of noise about process and congressional consultation and so forth’). Scarcer still is opposition to expansion of the statutorily sanctioned wars in Afghanistan and Iraq to Pakistan, Yemen and Somalia.

Indeed, District Courts have ruled, in support of Justice Department submissions, that the President and his military and national-security advisors (foremost among them the Defence Secretary and CIA chief) are the sole qualified interpreters of the geographical scope and temporal extent of the armed hostilities authorized by Congress in September 2001. It is, they have agreed, ‘inappropriate for a court… to adjudicate ex ante the permissible scope of particular tactical decisions that the Executive may take’ against al-Qaeda ‘affiliate organizations’ and their alleged members.

Should the President wish to assassinate a person, no public explanation need be adduced, let alone due process be afforded:

There are many aspects of military and national security operations in which the government does not publicly disclose the criteria that guide its actions, but that hardly means that in all such operations the government acts “arbitrarily.”

The President has a constitutional duty to take care that the law is faithfully executed, and he and the other defendants here take that obligation very seriously, endeavoring at all points to comply with all applicable domestic and international laws. The laws themselves are not secret.

And apart from those laws, the alleged operations here [‘lethal action’ against a US citizen in Yemen] would be guided by fact-intensive military and intelligence determinations involving command and policy judgments in the context of highly context-specific diplomatic and logistical considerations.

Any effort to reduce those judgments to a set of “criteria” to be publicly announced in response to a judicial injunction and subsequently enforced would, for the reasons previously discussed, exceed the bounds of judicial authority.

To gradual retrenchment of Fifth, Sixth and Eighth Amendment rights (a rollback still more advanced in jurisdictions, such as Australia and its states, where these are not explicitly granted or specified), must be added the ongoing evisceration of Fourth Amendment protections.

Viewed broadly, it must be agreed that there now are fewer institutional or procedural constraints on executive power (whether judicial review of administrative action, or reliance on legislative consent or oversight).

Posner and Vermeule write of the ‘ever-diminishing institutional capacities’ of the legislative and judicial branches relative to those of the executive, which has accrued ‘sweeping statutory and constitutional powers’. But, they aver, this has only ‘strengthened informal political checks on presidential action. The result is a president who enjoys sweeping de jure authority, but who is constrained de facto by the reaction of a highly educated and politically involved elite, and by mass opinion.’

These ‘non-legal constraints’, as they describe them, are ‘amorphous and vague’, and Posner and Vermeule hardly bother to substantiate or specify them at all:

The modern economy, whose complexity creates the demand for [unchecked] administrative governance, also creates wealth, leisure, education and broad political information, all of which strengthen democracy and make a collapse into authoritarian rule nearly impossible… Every action is scrutinized, leaks from executive officials come in a torrent, journalists are professionally hostile, and potential abuses are quickly brought to light. The modern presidency is a fishbowl, in large part because the costs of acquiring political information have fallen steadily in the modern economy, and because a wealthy, educated and leisured population has the time to monitor presidential action… The administrative state has thus helped to create a wealthy, educated population and a super-educated elite whose members have the leisure and affluence to care about matters such as civil liberties, who are politically engaged to a fault, and who help to check executive abuses.

Quite how this nosy busybody public, shorn of institutional levers by which to act upon what it ‘care[s] about’, monitors and ‘[brings] to light’, is supposed to apply its iron shackles, is anyone’s guess. The opinion poll, perhaps? Periodic elections? Indeed, Posner and Vermeule describe a President ‘substantially constrained by the ambient force of mass public opinion and the implicit threat of political backlash…he is enslaved to the opinion polls.’

The figurative language betrays its remoteness from reality; the description cannot but provoke mirth. A populace, in the short run constantly misinformed in the most blatant of ways, in the longer term kept ignorant and poorly educated, and allowed every few years to participate in voting rituals during which mass support is brigaded behind elite policy aims – and, to the extent it is not, rendered basically meaningless and ineffectual – never forms anything but a weak and pliable external constraint on the activity of administrators, upper-level bureaucrats and ministers.

The Posner-Vermeule ‘electoral democracy’ argument is a flimsy piece of apologetics, and hard to square with the growing and explicit contempt for popular opinion and voting rights held by Federalist Society confrères such as Justice Scalia, as shown by his approval of disenfranchisement and blatant vote rigging. (Thus, in December 2000, John Yoo wrote in the Wall Street Journal that ‘the people have no right to vote for president or even the Electoral College; that power is only delegated to them by the grace of the legislature. In appointing the electors itself, the legislature would be directly taking up its constitutional functions again.’)

But it does oddly resemble a current favourite theme of the left-liberal intelligentsia (journalists, academics, artists). Many of the latter are eager to criticize governmental misdeeds or arbitrary exercise of power (non-judicial detention of refugees, denial of basic rights like abortion, etc.).

But the atrophied and weak non-executive organs of state cannot plausibly be blamed for this, and the critics are not willing to call the entire system into question by asking why, at this point in history, the executive branch is being given unchecked authority, and why it might try to divide the population along racial or gender lines. So everything is explained by the fearful obeisance of a political elite which, for electoral reasons, submits (‘panders’) to the wishes of a backwards, uncouth, racist populace.

Both positions, it’s clear, share contempt for broad masses of the population, and solidarity with higher echelons of the state elite.

As Posner and Vermeule make plain in their ‘Accommodating Emergencies’ paper, some of the recent changes to the distribution of power between government branches, and the withdrawal of long-standing checks and rights, are intended to facilitate Washington’s pursuit of its strategic objectives (i.e. the waging of aggressive war).

Others, more clearly, such as detention without trial, are designed to allow persecution of dissenters. This points to the more fundamental purpose, shared across most economically advanced countries: to re-shape the state to suit new circumstances, in which political stability rests on unprecedentedly narrow social foundations.

That unchecked rule by the executive means government on behalf of a tiny group of rentiers can be seen in the anti-democratic ‘changes of governance’ envisaged for the EU, suggested by European Central Bank chief Jean-Claude Trichet upon receiving the Karlspreis in Charlemagne’s palace at Aachen:

It is of paramount importance that [fiscal] adjustment occurs; that countries – governments and opposition – unite behind the effort; and that contributing countries survey with great care the implementation of the programme.

But if a country is still not delivering, I think all would agree that the second stage has to be different.

Would it go too far if we envisaged, at this second stage, giving euro area authorities a much deeper and authoritative say in the formation of the country’s economic policies if these go harmfully astray? A direct influence, well over and above the reinforced surveillance that is presently envisaged?

The rationale for this approach would be to find a balance between the independence of countries and the interdependence of their actions, especially in exceptional circumstances.

We can see before our eyes that membership of the EU, and even more so of EMU, introduces a new understanding in the way sovereignty is exerted. Interdependence means that countries de facto do not have complete internal authority. They can experience crises caused entirely by the unsound economic policies of others.

With a new concept of a second stage, we would change drastically the present governance based upon the dialectics of surveillance, recommendations and sanctions.

In the present concept, all the decisions remain in the hands of the country concerned, even if the recommendations are not applied, and even if this attitude triggers major difficulties for other member countries.

In the new concept, it would be not only possible, but in some cases compulsory, in a second stage for the European authorities –  namely the Council on the basis of a proposal by the Commission, in liaison with the ECB – to take themselves decisions applicable in the economy concerned.

One way this could be imagined is for European authorities to have the right to veto some national economic policy decisions. The remit could include in particular major fiscal spending items and elements essential for the country’s competitiveness.