Archive for January, 2013

Doubt as a free good; or, ‘Product defence’ as an externality

January 28, 2013

The previous post considered advice, courtesy of Joe Biden, that videogame firms should try to ‘improve their public image’, presently mottled by various ‘kinds of evidence linking video games to aggression.’

Impressions, it seems, are everything: videogames firms don’t ‘necessarily need to change anything they’re doing,’ but must instead focus on ‘how they’re perceived by the public’.

This need for decorative gestures comes ‘irrespective of the “truth” of the violence/media debates’, says a vociferous pro-games participant in the latter. Not action but legitimation is called for.

Such PR needs do arise from time to time.

The owners and managers of a business enterprise naturally want to preserve their full dominion over its assets, and the prerogatives (and cash flow) that follow from it.

Thus firms are regularly obliged to undertake the defence of a product or activity that, while profitable, also poses a risk or hazard to consumers, employees, the environment, the assets of other firms, etc.

Restrictions on the prerogatives of ownership include many types of government regulation: quality standards, labelling laws, health and sanitation laws, zoning ordinances or land-use restrictions that limit where commercial and industrial structures may be built, commercial licences that control who may operate a business and where, minimum-wage laws, anti-discrimination laws, pollution control and monitoring by environmental protection agencies, occupational safety and health regulations, taxation or eminent domain, and the establishment of civil remedies.

In ordinary circumstances, it must be said, any hazardous byproducts (negative externalities or ‘market failures’) arising from economic activity, while of course regrettable, are hardly prohibitive.

Both tort law and government regulation aspire to an ‘efficiency standard’, balancing the costs arising from some commercial activity or product against its benefits.

Broadly speaking, if the increment in profits outweighs the decrement in human lives or environmental amenity, according to some arithmetic, the tradeoff is deemed ‘worth it’.

But, in rare circumstances, official opinion may decide that the troublesome product or activity imposes excessive or intolerable burdens upon the state (e.g. medical costs, political instability), upon other special interests (e.g. insurance providers) or upon a powerful and broad social constituency (e.g. the propertied classes as a whole, through higher wage bills or loss of legitimacy for existing social institutions).

In such cases, particular business interests may be sacrificed for the ‘greater good.’ The state may impose regulations limiting the full exercise of property rights, restricting what the offending owners may do with their assets or how their enterprises operate.

Thus the need for corporate ‘product defence’ campaigns.

These are deployed, permanently in some industries, to dispel alarm and forestall the threat of damaged business interests from lower sales revenue, product liability claims, government regulation or outright prohibition.

Cigarette manufacturers, oil corporations, etc. have notoriously employed, or engaged as independent contractors, teams of professional Panglossians.

These ‘merchants of doubt’, co-opted or career, were set up in well-appointed Potemkin institutions for phony research. Their task was ‘establishing a controversy at the public level’, where no such equivocation existed at the level of peer-reviewed science.

Meanwhile economists like Kip Viscusi provided ad hoc intellectual warrants and boosterism.

Viscusi argued that the addictiveness of cigarettes, as measured by smokers’ responses to rising prices, was comparable to ‘consumer products that people generally do not consider addictive, such as theater and opera, legal services, and barber shops and beauty parlors.’

And anyway, he added, premature deaths caused by smoking save the government the cost of pensions and nursing homes.

Videogame firms face a similar need to defend their product against the risk of regulation, damaging criticism, penalty or suppression.

Duke University economist James T. Hamilton has asserted that, ‘at its core’, media violence ‘is a problem of pollution.’

This is because ‘programmers and advertisers may not take into account the full costs to society of the show they schedule or support.’ Such costs include the desensitization, increased aggression and fear experienced by audiences, particularly children.

So defined, and according to the conventional prescription of ‘public policy’ experts, the remedies for media violence must resemble the solutions for environmental pollution: zoning (e.g. for broadcast TV, ‘shifting violent programs to times when children are less likely to be in the audience’) or taxation.

Thus several jurisdictions, including the state of California, have attempted to prohibit the sale of violent video games to minors.

[Image: Entertainment Software Association lobbying spending, Q3 2012]

But the response by videogames firms has been different from that followed by cigarette manufacturers and oil corporations.

Certain features of the product itself and the market for video games, as described below, make it less necessary for firms to directly fund ‘product defence’ by bought-and-paid-for researchers and centrally directed think tanks (which these firms nonetheless do finance).

For several reasons, which are outlined below, the advocacy service is already provided at close to zero expense by ideologists, consumers, other segments of the mass-communications media and academics.

The latter constitute, I will argue, a decentralized ‘epistemic community’ of like-minded people and linked institutions. Shared incentives (and self-conscious group identity) motivate them to adopt similar beliefs about the harmlessness of violent video games, ignoring (for both psychological and commercial reasons) available information that disconfirms such beliefs.

But the first reason can be dealt with briefly, since it is least relevant to my point in this post.

Any statement regarding the harmfulness of video games products can simply be trumped (in the US) by brandishing the First Amendment, thereby activating the professional guild values of journalists and academics.

A seemingly dispositive argument can be made that commercial videogames are constitutionally-protected speech, including when addressed to minors and involving extreme violence. Thus their sale is immune from restriction or impediment, ‘even where protection of children is the object’ (Antonin Scalia).

This line has been advanced by the Cato Institute, the corporate mega-lobby ALEC, the industry-funded think tank The Media Institute, the Electronic Frontier Foundation and the Progress and Freedom Foundation.

In 2011 it was supported 7-2 by the Supreme Court in Brown v Entertainment Merchants Association, striking down the Californian statute.

Since ‘there is no exception for violence’, voluntary ‘self-regulation’ by the industry and ‘parental empowerment’ are the only responses available to ‘what some people think is offensive’ (legal counsel for Michael Gallagher, president of the Entertainment Software Association).

If so desired, the syllogism may be extended to a broader claim: any critical scrutiny of a ‘creative’ product violates the First Amendment rights of its maker.

A recent example appears in the breathtakingly disingenuous statement issued by a Sony Entertainment spokeswoman, in response to criticism from within Hollywood of Kathryn Bigelow’s Zero Dark Thirty: ‘The film should be judged free of partisanship. To punish an artist’s right of expression is abhorrent. This community, more than any other, should know how reprehensible that is.’

A second feature of video games is much more important in explaining why the industry’s PR defence occurs, in large part, without the involvement of centrally organized or directly paid agents.

As with many media, information and entertainment products, the market for video games exhibits what economists call ‘network externalities’.

When this feature is present, the value or benefit of a product is increasing in its popularity or number of users. Additional users make the product more valuable or appealing (i.e. increase the willingness of buyers to purchase it at the going price).

Sellers duly profit from this cascade or bandwagon effect.
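The bandwagon effect just described can be illustrated with a toy model (every number below is invented purely for illustration): each consumer’s willingness to pay is her standalone valuation plus a network benefit proportional to the current share of adopters, and adoption is iterated to a fixed point.

```python
import random

random.seed(0)

def equilibrium_adopters(n, price, beta):
    """Iterate adoption to a fixed point: consumer i buys when her
    standalone valuation plus the network benefit (beta times the
    share of existing adopters) meets the price."""
    valuations = [random.random() for _ in range(n)]  # standalone tastes in [0, 1)
    adopters = 0
    while True:
        share = adopters / n
        # Each round, the installed base lowers the effective hurdle for buying.
        new = sum(1 for v in valuations if v + beta * share >= price)
        if new == adopters:          # no further converts: equilibrium reached
            return adopters
        adopters = new

n, price = 10_000, 0.8
no_network = equilibrium_adopters(n, price, beta=0.0)
with_network = equilibrium_adopters(n, price, beta=0.5)
print(no_network, with_network)
```

With no network benefit roughly a fifth of consumers buy; with even a modest one, equilibrium adoption roughly doubles, as each wave of buyers pulls in the next. This is the cascade from which sellers profit.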

[Image: adopters joining a network]

With videogame consoles and other platforms, an increase in the number of one type of user (customers or game players) increases the number of another type of user (content providers or game developers).

A pair of mainstream economists explain how this works:

Buyers of videogame consoles want games to play on; game developers pick platforms that are or will be popular among gamers…

Videogame platforms, such as Nintendo, Sega, Sony Play Station, and Microsoft X-Box, need to attract gamers in order to convince game developers to design or port games to their platform, and need games in order to induce gamers to buy and use their videogame console. Software producers court both users and application developers, client and server sides, or readers and writers. Portals, TV networks and newspapers compete for advertisers as well as “eyeballs”. And payment card systems need to attract both merchants and cardholders.

The console firms design and manufacture hardware, then contract out to independent game developers to provide games for the platform (as well as producing their own in-house titles). They may finance the developer’s large fixed costs.

The independent developer pays a fixed fee to the console maker for use of proprietary software development tools (the ‘devkit’), then also pays a per-unit licensing royalty on sales. These IP royalties, a form of rent, are the principal source of profit for the console producers and ‘publishers’.
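The flow of fees and royalties can be sketched in a few lines. The figures below are hypothetical, chosen only to show the structure of the arrangement; actual devkit fees and royalty rates are confidential and vary by platform.

```python
def console_maker_take(devkit_fee, royalty_per_unit, units_sold):
    """Platform owner's revenue from one third-party title: a fixed
    devkit fee up front plus a per-unit IP royalty on every copy sold."""
    return devkit_fee + royalty_per_unit * units_sold

def developer_profit(wholesale_price, royalty_per_unit, units_sold,
                     devkit_fee, fixed_dev_cost):
    """Independent developer's profit after paying the platform owner
    and covering the large fixed costs of development."""
    margin = wholesale_price - royalty_per_unit
    return margin * units_sold - devkit_fee - fixed_dev_cost

# Illustrative (invented) figures: a $7 per-unit royalty, $30 wholesale
# price, a $10,000 devkit and $5m of fixed development costs.
units = 500_000
print(console_maker_take(10_000, 7.0, units))
print(developer_profit(30.0, 7.0, units, 10_000, 5_000_000))
```

Note how the platform owner’s take scales with units sold while bearing none of the title’s development risk: the royalty is a rent on the installed base the console commands.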

[Image: Two-sided markets with network externalities]

More crucially, like many segments of the media industry, video games exhibit indirect and ‘cross-market’ network externalities.

This means that the value of other products is increasing in the popularity of video games. These complementary products find their usefulness to buyers is enhanced as video games themselves have more buyers.

For example, growth in the number of users of particular software increases the attractiveness of a complementary component, console or other hardware, such as an HD TV, a speedy Internet connection or PC, a new handheld device and so on.

There are spillovers across markets: the more buyers a game has, the more attractive becomes brand or merchandise tie-ins, the more advertising and games journalism can occur, the more likely becomes permission to use proprietary material (music and film) in return for per-unit royalty fees, and so on.

Sony famously tried to increase demand for its Blu-ray discs, and its revenues as a movie studio, by bundling a Blu-ray player into its Playstation 3 console.

Entertainment Software Association president Mike Gallagher described this feature of video games in a speech to the Institute for Policy Innovation, a right-wing think tank:

[As] gamers know – and economists have confirmed — the demand for great video and computer game experiences also drives sales of complementary products and services, such as for broadband and high-definition TVs. Our industry stimulates complementary product purchases of roughly $6.1 billion a year in the U.S. alone. These purchases are also spread around to businesses large and small.

Network externalities mean that the greater the number of consumers purchasing and using video games, the larger is demand in several other distinct markets.

This includes others not mentioned by Gallagher, among them the mass communications media, advertising, journalism and other opinion-making fields. All may experience mutually increasing demand for their product as the number of people adopting and playing video games grows.

This translates into material rewards and personal advantage: higher profits (or rents) for owners and higher earnings (or other labour-market success) for employees in these complementary markets.

Along with games consumers themselves, these providers of complementary products (whose returns increase with the usage of video games) therefore have incentives to provide the games industry with ‘product defence’, flattery and boosterism. Thus they can be found disseminating cheerful claims that violent video games are neither a public-health threat nor morally objectionable.

Success really does provide its own justification. Self-conscious individual corruption is not necessary. Motivated belief formation (‘wishful thinking’, dissonance reduction or effort justification) is sufficient to persuade most people that whatever brings them rewards and a livelihood can’t be altogether bad.

The familiar dynamics of belief transmission in tightly clustered social networks then apply, with epistemic contagion ensuring that all members share credence in the safety of violent video games.

As explained by two rapt economic theorists (among the chief academic ideologues of our postmodern infoculture), the dynamics of network effects (which also support so-called consumer ‘subcultures’) are also those of conformity and herd behaviour.

Increasing returns in the market for video games (and thence for related products) provide a scaffold for the propagation of beliefs about the soundness of the product.
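A minimal sketch of such belief propagation, as a threshold-contagion model on a tightly clustered ring network. The network shape, seed size and threshold are illustrative assumptions, not an empirical claim about any actual community.

```python
def belief_cascade(n, k, seed_size, threshold):
    """Threshold contagion on a ring lattice: each agent links to its k
    nearest neighbours on each side (a tightly clustered network) and
    adopts the belief once at least `threshold` of its neighbours hold it."""
    believes = [i < seed_size for i in range(n)]  # a small seeded cluster
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if believes[i]:
                continue
            neighbours = [believes[(i + d) % n] for d in range(-k, k + 1) if d != 0]
            if sum(neighbours) / len(neighbours) >= threshold:
                believes[i] = True
                changed = True
    return sum(believes)

# Five seeded believers in a 200-agent network, each tied to 4 neighbours.
print(belief_cascade(n=200, k=2, seed_size=5, threshold=0.5))
print(belief_cascade(n=200, k=2, seed_size=5, threshold=0.6))
```

When the adoption threshold is at or below one half, a handful of seeded believers converts the entire network; raise it slightly and the cascade never starts. Dense clustering and shared incentives, on this sketch, are what let a locally convenient belief become universal credence.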

In other words, there is no need for video-games firms to follow the example of tobacco firms. The latter had to seek out Reader’s Digest and persuade Edward R. Murrow to cease the damaging coverage of their product. In the presence of strategic complementarity, however, good press and favourable PR take care of themselves.

Ultimately the video games industry is tied to other sections of the media, information and entertainment industry (by direct threads of ownership, credit, cross-subsidy and labour-market adjacency) in ways that did not apply for Philip Morris or Exxon.

Thanks to this relationship, there is a standing army of journalists, bloggers and opinion-makers who will reliably leap to the defence of games without needing to be bamboozled or force-fed talking points.

(See the scornful online article in Condé Nast publication Vanity Fair about Biden’s meeting: ‘Didn’t Tipper Gore resolve the “violent video games” issue shortly after she heard Prince for the first time, in 1985, and insisted on warning labels on CDs and game packaging? Apparently not.’)

Of course, it is true that tabloid TV programs, newspapers and talk-radio presenters do periodically suggest, usually following some mass shooting, that violent video games may have deleterious effects on their users or on society.

But so do they regularly rail against the greed of banks and the venality and corruption of politicians.

This never seriously threatens the continued existence or positions of the latter, any more than the commercial survival of a profitable branch of the entertainment industry is endangered by the feeble, short-lived denunciations of ‘old media’ commentators. (Such critical beliefs about e.g. banks, which find no outlet in the electoral system or within reach of any available levers of popular influence, are allowed only inchoate and limited expression. They may thereby be channelled into such useful directions as racism, scapegoating, etc., or leveraged for authoritarian or reactionary purposes, or deliberately stoked by one powerful group to win bargaining power over another.)

Most ‘anti-games’ media commentators, of course, are employed or paid by a firm that itself is a subsidiary of some conglomerate or holding company (Vivendi, Viacom, Disney, Time Warner, etc.) that also owns firms publishing, developing, marketing or distributing video games.

Traders in the language of ‘old media’ and ‘new media’ take their generational framework quite literally, as though novel industries within the consumer-entertainment sector must inevitably compete with and displace traditional and existing ones, much as each human generation must physically supplant that which it succeeds.

Thus the argument from ‘moral panic’, or ‘technopanic’ as Cato’s Adam Thierer would have it. (Thierer is former president of the Progress and Freedom Foundation, a ‘market-oriented think tank that studies the impact of the digital revolution’, funded by the Entertainment Software Association and the world’s largest media corporations. He now is at George Mason’s Mercatus Center.)

As I’ve shown, remonstrating against ‘moral panic’ has been deployed to great effect by Christopher Ferguson and others to deter all criticism of violent video games.

The claim presented here (packaged in the language of 1970s ‘left-wing’ sociology) is that ‘old media’ entities are fogeyish cultural ‘authorities’ seeking to preserve their privileges. They are resistant to novelty, such as is found in ‘new media’ products like video games.

This argument is calculated to push all sorts of buttons and win a broad, ramified constituency.

The ‘knowledge economy’ rhetoric is chosen to win the allegiance of a self-identified ‘creative class’, which looks favourably upon new forms of entertainment, information and communications technology. Borrowing from the sociology of deviance, meanwhile, aims to attract ‘progressives’ who sympathize with the marginalized.

Onto this is grafted the intellectually fashionable idea of ‘belief contagion’, fear cascades and popular risk hysteria, developed by Cass Sunstein.

The result is a neat contrarian package, unassailable by anyone who considers themselves to be ‘sophisticated.’


But there is no reason games can’t merely supplement existing media, and become part of the asset portfolio of existing media giants (Activision, for example, is now a subsidiary of Vivendi, having been started during the 1970s as an independent company by disgruntled Atari games developers).

Indeed, due to the high fixed costs and low marginal costs involved in digital production and distribution, it seems inevitable that the sector should exhibit economies of scale and thus create barriers to entry. Its surviving firms are destined to become subsidiaries of (or to go on licensing intellectual property from) some conglomerate or holding company.

[Image: development costs]

Few avowedly ‘progressive’ people have sympathy for such corporate media behemoths as Sony, Microsoft, etc. They may however be induced, by what Thomas Frank called ‘market populism’, to express enthusiasm for the venture-capital driven world of games.

In this sector, small and scrappy developers and start-up companies (and later small- and medium-sized enterprises) have few assets and thus are credit constrained.

These firms therefore rely on private equity finance from Silicon Valley. As shown above, they also feed money into the pockets of those large companies (oligonomies, to use Steve Hannaford’s term) that own the platform for independent content producers and the distribution system for customers (Apple’s iTunes, Amazon, Netflix, Rhapsody, etc., and the three big games-console makers, Sony, Nintendo and Microsoft), as well as to those that aggregate and allow mining of massive data sets to build fortunes from advertising brokerage (Google, Facebook).

This leads on to the third reason why the video games industry has not needed to rely upon centrally organized ‘merchants of doubt’, nor to ‘astroturf’ through paid agents, to defend its products.

The network externalities of the videogames industry reach all the way to academia, thanks especially to the contemporary commercialization of the university.

Consider the remark made by Georgia Tech professor Ian Bogost in The Atlantic about the visit by games industry CEOs to the White House for Biden’s meeting:

[Public] opinion has been infected with the idea that video games have some predominant and necessary relationship to gun violence, rather than being a diverse and robust mass medium that is used for many different purposes, from leisure to exercise to business to education… Truly, we cannot win.

‘We’, says the faculty member of a state university about ‘my colleagues in the games industry’.

There now are many humanities and social science scholars, faced with shrinking faculty budgets, stingy hiring policies and poor tenure prospects, who in desperation have hitched their wagon to a rapidly growing segment of the entertainment industry.

These academics have perceived a confluence of fortunes: as the games industry goes, so do they. As Bogost says, they naturally seek to acquire ‘cultural legitimacy’ for their medium.

An acknowledgement of video games’ good standing (as a respectable, non-hazardous part of the culture, a ‘diverse and robust mass medium’, worthy of journal articles and monographs) is needed if these ambitious academics are to succeed in capturing a permanent seat at the table (perhaps fusing with cinema studies or even sitting alongside it as rough equals).

Therefore many of these scholars are obliged to defend violent games and to furnish the desired ‘no proof of harm’ arguments, come what may.

(Consider Texas A&M psychologist Christopher Ferguson’s comical attempt to argue against the well-supported hypothesis that violent games desensitize users to violence. The recently published study involved his student participants watching an episode of the programs Law and Order: SVU, Bones or Once Upon a Time, then failing to self-report reduced empathy when subsequently shown violent footage. Here we can add a corollary to the argument about the unique situation faced by defenders of video games in the academy. On average, peer review forms less of a barrier to publishing worthless or spurious results in social science or humanities journals than in the natural sciences.)

Many videogames scholars are precariously ensconced in academia: lowly adjuncts who receive no White House invitations. They are obliged to supplement their teaching income through paid work linked to the games industry (e.g. promotion, development, journalism).

Others, with stabler positions and savings to play with, can risk starting up their own firms (Bogost is one, though it seems unlikely to me that his above use of the collective first-person pronoun referred to this).

Such varieties of dependence and forms of extramural interaction create a commonality of both personnel and interests, tying the commercial success of a product to the scholarly work based on it. This increases feelings of affiliation. This sense of shared fate is not mistaken, and leads to unabashed scholarly apologetics for video games.

Amid the laudation, scope exists for some academics to engage in ‘criticism’ of certain aspects of the videogames industry and its products. But such reproaches are only of the nourishing, tough-love type that ultimately has the industry’s welfare at heart. Bogost’s encomium captures the general tone.

For all these reasons, there seems little requirement for videogames firms to orchestrate a subterranean ‘product defence’ by funding dedicated merchants of doubt. There are plenty of respectable and motivated people who perform the cheerleading task already as a sideline to their day job.

Now comes the final and perhaps most crucial reason why videogames firms have not had to spend more on paid agents and front groups to undertake a political defence of violent games (though expenditure on ‘government relations’ professionals is indeed enormous: the ESA typically spends more than $1 million per quarter on K Street lobbyists).

The US state leadership is committed to the promotion of militarism and violence.

Therefore the fact that members of the country’s population are presented with large doses of realistically depicted violence as ‘entertainment’, thereby being brutalized from early childhood, must prompt little concern, and provoke some pleasure, in ruling circles.

In a submission to a US Senate committee investigating violent media products, Thierer wrote:

Many people — including many children — clearly have a desire to see depictions of violence… Could it be the case, then, that violent entertainment — including violent video games — actually might have some beneficial effects? From the Bible to Beowulf to Batman, depictions of violence have been used not only to teach lessons, but also to allow people — including children — to engage in sort of escapism that can have a therapeutic effect on the human psyche. It was probably Aristotle who first suggested that violently themed entertainment might have such a cathartic effect on humans…

One might just as easily apply this thinking to many of the most popular video games children play today, including those with violent overtones…

This echoes Judge Posner’s opinion in the Kendrick case that: ‘To shield children right up to the age of 18 from exposure to violent descriptions and images would not only be quixotic, but deforming; it would leave them unequipped to cope with the world as we know it.’

In what Thierer called a ‘blistering tour-de-force’, Posner ‘[explained] how exposure to violently-themed media helps to gradually assimilate us into the realities of the world around us.’

But what did the eminent Posner mean by the ‘world as we know it’?

His 2001 judgement (on an Indianapolis ordinance banning ‘gratuitously violent’ games in arcades) had gone on curiously:

Now that eighteen-year-olds have the right to vote, it is obvious that they must be allowed the freedom to form their political views on the basis of uncensored speech before they turn eighteen, so that their minds are not a blank when they first exercise the franchise… People are unlikely to become well-functioning, independent-minded adults and responsible citizens if they are raised in an intellectual bubble.

So Posner’s defence of hyper-violent video games was that they mould the political views of children, making of them responsible citizens ready to exercise political judgement. Lacking such inputs they would, apparently, make unreliable voters.

What callowness did games erase? What reality were children being prepared for?

Some idea may come from another submission to the same Senate committee hearing on violent games, this one by David Horowitz, director of the industry lobby group the Media Coalition.

Horowitz put it thus:

The impossibility of distinguishing “acceptable” from “unacceptable” violence is a fundamental problem with government regulation in this area. The evening news is filled with images of real violence in Iraq and Afghanistan routinely perpetrated by the “bad” guys. Often this horrific violence goes unpunished. It would be virtually impossible for the government to create a definition that would allow “acceptable” violence but would restrict “unacceptable” violence.


Muddying the waters

January 23, 2013

A fortnight ago, in what the US vice president billed as a commensal experience, Joe Biden, Eric Holder and Kathleen Sebelius hosted senior executives from Activision Blizzard, Electronic Arts and other videogames firms at the White House.

The chief of the Entertainment Software Association, the industry lobby group for firms such as Microsoft, Sony and Nintendo, also attended along with several ‘independent researchers’.

This event was part of the Obama administration’s ‘gun violence task force’, a work of lustration designed to provide uplift after the Connecticut elementary school massacre.

Biden had previously mingled with representatives from the Motion Picture Association of America, the National Association of Broadcasters, Comcast, Wal-Mart Stores and the NRA.

Texas A&M associate professor Christopher Ferguson, whose work on video games I’ve discussed here and here, was apparently one of the ‘independent researchers’ to attend Biden’s meeting with videogames industry CEOs.

Ferguson later described Biden’s remarks to his assembled visitors:

His message, to a large degree, was, ‘Whether or not there are any kinds of evidence linking video games to aggression, what are things the industry could do to improve its image?’…

As much as anything, he seemed to be encouraging them to think about their public image, irrespective of the ‘truth’ of the violence/media debates. I don’t know if they were quite there yet, I think they were trying to emphasize that they are not part of the problem, which is understandable, whereas VP Biden was trying to emphasize that even if they are not part of the problem they could be part of the solution…

I think he was inviting the industry to consider basically ways that it could improve its image among non-gamers.

Ferguson said that ‘Biden encouraged the video game industry to consider ways of better educating the public.’ Biden was quoted as saying: ‘I come to this meeting with no judgment. You all know the judgment other people have made.’

According to the Wall Street Journal:

Ferguson said that today’s conference showed him that the game industry doesn’t ‘necessarily need to change anything they’re doing,’ but instead focus on ‘how they’re perceived by the public.’

‘What the industry needs to do is take the Vice President’s advice and really think about: what are some positive things that the industry can do? Public education campaigns about the ESRB [the self-regulatory Entertainment Software Rating Board] rating systems, trying to avoid some blatant missteps like having a gun manufacturer as part of their website, that kind of stuff,’ Ferguson said, referring to a controversial campaign in which Electronic Arts embedded links to weapons manufacturers’ products in the promotional website for its military shooter “Medal of Honor: Warfighter.”

The key participants in this charade, one can surmise, would rather the public have been spared details of everything but Biden’s puffy platitudes (‘An incident that I think we can all agree sort of shocked the conscience of the American people’, ‘There are no silver bullets’, etc.).

A senior elected official advising corporate executives on how better to manipulate the populace to advance the commercial interests of their privately-owned firms – while no doubt a common occurrence – is not a spectacle intended for transmission to a mass audience via media outlets.

So we have reason to be grateful for the candid post-meeting deposition by the media-friendly Ferguson. (He also was given a platform in Time magazine to defend violent games following last December’s mass shooting. Unsurprisingly, he emitted the conventional wisdom on the topic: Gun control + mental health services! His earlier analysis of the Batman cinema massacre in Colorado: ‘[If] it wasn’t Batman it would be something else… Trying to make sense of it is pointless.’)

Not for Ferguson, it seems, the giddy engouement usually inspired in intellectuals by proximity to wealth and power. Could such intimacies with Top People, of a lesser sort, have become familiar?

Ferguson observed that visiting Biden at the White House had made videogames firms appear ‘helpful’, and declared he was ‘cautiously optimistic’.

Indeed, received welcomingly or not, the advice given by the politician and the academic – that the industry should ‘improve its image’ by ‘educating the public’, ‘irrespective of the truth’ – was astute.

How might such a strategy be implemented?

The recent history of corporate PR campaigns – marshalled in defence of a maligned or hazardous product, and deployed to forestall the threat of lower sales revenue, product liability claims, government regulation or outright prohibition – provides the videogames industry with a successful template for muddying the waters.

There exist dedicated consulting firms that specialize in ‘product defence’ and ‘litigation support’, including the Weinberg Group, ChemRisk and Exponent.

The work of these firms is nowadays studied under the name of agnotology. It usually involves suggesting that ‘debate’ or ‘controversy’ exists within a scholarly discipline or research community when in fact there is little or none.

‘Manufacturing uncertainty’ may be done by funding or promoting masses of research (legitimate as well as illegitimate, peer-reviewed alongside hackwork) at dedicated think tanks as well as independent academic institutions, then pumping it into the mass media to create an apparent diversity of ‘expert’ opinion.

Cacophony leads to doubt among the lay populace over the true state of scientific knowledge, and thus weakens their confidence in their own inferences: ‘results are inconclusive’, ‘the jury is still out’, ‘the science is unsettled’, etc.

The locus classicus of these ‘epistemic filibuster’ techniques dates from December 1953, when the CEOs of Philip Morris, Benson and Hedges, American Tobacco and US Tobacco met at the Plaza Hotel in New York, following the publication of research on the carcinogenic effects of cigarettes.

There the tobacco executives engaged the PR firm Hill and Knowlton, which quickly recommended a strategy:

The underlying purpose of any activity at this stage should be reassurance of the public through wider communication of facts to the public. It is important that the public recognize the existence of weighty scientific views which hold there is no proof that cigarette smoking is a cause of cancer.

The PR firm advised the cigarette manufacturing firms to establish a Tobacco Industry Committee for Public Information. It would promote ‘general awareness of the big IF issues involved’ with the aim of ‘establishing a controversy at the public level.’

Equivocation and doubt about the validity of scientific evidence were created by recruiting well-credentialled scholars.

Since at this time ‘the case against tobacco was far from proven’, these consulting scholars would minutely examine the conduct of epidemiological and animal studies, question the precise shape of the dose-response curve relating exposure to ill effects, highlight ignorance or uncertainty about the specific causal mechanism involved, point to latency in response patterns, and sift through meta-analyses searching for gaps, errors or possible confounds.

The resulting ‘strong body of scientific data or opinion in defense of the product’ helped cigarette manufacturing firms to successfully defend themselves against tort claims for many decades (note, however, that this was not due to a duped public: many ‘landmark’ jury findings that awarded damages for product liability were overturned by appellate judges).

[Image: ‘doubt is our product’]

These same obfuscatory procedures were subsequently used to delay recognition of the existence or harmful effects of toxic waste, the role of CFCs in ozone layer depletion, global warming caused by GHG emissions, asbestos, the nuclear-winter scenario and DDT.

The doubt-mongering agnotological template is followed expertly by the following article on the videogames website Kotaku, a Gawker Media blog.

The article purports to inform readers of the up-to-the-minute scholarly state of play (‘everything we know today’) concerning the psychological effects of violent video games:

‘[The] question of whether violent video games lead to aggression has been hotly debated’; ‘Some scientists have concluded that…’; ‘Others argue that…’; ‘It’s a debate that has been going on for over 25 years. And it shows no signs of stopping’; ‘video game violence has been criticized and scrutinized for decades now. You’ve probably heard the theories, maybe even voiced them… For gamers, this is all tired ground’; ‘On one side of the argument are…’; ‘Then there’s the other side of the argument, supported by… The evidence, this camp says, just isn’t conclusive’; ‘So scientists are divided, to say the least’; ‘Can we really link verbal or physical abuse to a test that seems so strange? It’s measures like this—and really, the ambiguity of “aggression” as a psychological concept—that have made professors like Chris Ferguson skeptical of today’s research, even when the evidence seems conclusive’; ‘You don’t need a doctorate to know that the human brain is a complex machine, and that nothing about our behavior is predictable. There’s nothing exact about social science’; ‘Whether you believe that the link between violent video games and aggression is clear or you think the science is too faulty to mean anything—and there are strong cases on both sides…’; ‘So maybe the data speaks for itself: maybe there is a clear link between video games and aggression’; ‘Or maybe Chris Ferguson is right, and today’s research is too inconclusive to determine any causal links. It certainly can’t hurt to be more skeptical about what you see in the media.’

Thus, with perfect symmetry, does a lay audience encounter both sides of the story.

Readers learn of Ferguson’s queries about the relevance of standard psychological experimental techniques, such as word-completion tasks and Stroop tasks. They read his scepticism about the usefulness of such methods for detecting the priming effects of exposure to a presented stimulus (e.g. aggressive thoughts and feelings provoked by playing a violent video game).

They are told about possible confounds and other methodological qualms. They witness Ferguson shuttling between accusations that no media-violence effect exists, and admissions that any effect must, at any rate, be of negligible magnitude or, at least, ‘rather weak’.

Is the average reader of Kotaku equipped to judge this for what it is? Or does he or she instead perceive it as an arcane intra-disciplinary ‘debate’ between colleagues, unresolved and still in progress, with ‘both sides’ worthy of a hearing?

All of this is familiar to the historian of agnotology and ‘product defence’.

But today’s videogame firms have several advantages that their predecessors in other industries lacked.

Due to these advantages, Nintendo, Microsoft and Sony may not find it necessary or expedient to build Potemkin research institutions or fund bogus research by dedicated ‘merchants of doubt’.

In particular, due to the presence of what economists call ‘network externalities’, consumers of video games and providers of complementary products (including firms producing other entertainment and media goods, as well as journalists and even academics) already find it worthwhile to provide the videogames industry with ‘product defence’.

The latter comes free of charge and without needing to be organized directly.

While the market (and the commercialized university) provides a PR service in this costless and decentralized fashion, there is no pressing reason to set up, fund and oversee centralized think tanks or intramural collectives. Why add noise to a communication channel that already is sufficiently contaminated?

I’ll explain and develop this idea in the next post.

The ‘green’ economy: a fantasy fuelled by financialization

January 17, 2013

Timorously, even by the standards of scholarly journals, three economists recently ventured to make the obvious critical point about ‘green growth’:

‘Greening’ economic growth discourses are increasingly replacing the catchword of ‘sustainable development’ within national and international policy circles. The core of the argument is that the growth of modern economies may be sustained or even augmented, while policy intervention simultaneously ensures sustained environmental stewardship and improved social outcomes…

[Yet] when judged against the evidence, greening growth remains to some extent an oxymoron as to date there has been little evidence of substantial decoupling of GDP from carbon-intensive energy use on a wide scale.

‘Sustainable development’ had been the favoured watchword of both policy elites and eco-activists for well over twenty years – at least since the UN’s Brundtland Report (1987) and Rio Earth Summit (1992), which established a Commission on Sustainable Development.

The chief feature of this term, like the slogan ‘green growth’, was that the noun nullified the adjective rather than being modified by it.

Claims of sustainability, where they were not simply a decorative adornment fit for PR consumption, veiled attempts to seize rural land and other resources for ‘green’ development, resource extraction, ecotourism, etc.

Entities formed in the name of sustainable development include the World Business Council for Sustainable Development. This was created by the Swiss billionaire Stephan Schmidheiny, and has corporate members including Royal Dutch Shell, BP, DuPont, General Motors and Lafarge. (One of its projects is the Cement Sustainability Initiative).

Yet, according to a recent World Bank report, the post-Rio mantra of ‘sustainable development’, while suitably vapid and obfuscatory, was inadequately attentive to economic growth.

‘Inclusion and the environment’ were laudable areas of concern. But they had to be ‘operationalized’ via the instrument of green growth if they were to feed the hungry, etc.

Conveniently, amid the market euphoria and asset-price inflation of the late 1990s, PR slogans of ‘sustainability’ became slightly less measured and sober, taking on a more obviously hucksterish tone.

Cornucopian eco-futurists like Jeremy Rifkin (author of The Hydrogen Economy) suggested that a New Economy had become ‘decarbonized’, ‘weightless’ or ‘dematerialized’.

The New Economy, its embellishers said, had been liberated from geophysical constraints. Through the technological miracle of an information-based service economy, it appeared, for the first time since the birth of industrial capitalism, that growing output and labour productivity had been ‘de-coupled’ from higher energy intensity, more material inputs and increased waste byproducts. (Evidence showed otherwise.)

[Figure: Ben McNeil, ‘Green growth’]

In his 2012 presidential address to the Eastern Economic Association, Duncan Foley speaks of the reality behind this ‘green growth’ ideology:

Rosy expectations that information technology can produce a “new” economy not subject to the resource and social limitations of traditional capitalism confuse new methods of appropriation of surplus value as rent with new methods of producing value.

Thus, he notes, the appearance of ‘delinking’ between aggregate output and energy use (of fossil fuels) is an artefact of the growing incomes of individuals and entities (e.g. bankers, holders of patents or copyright, insurers, real-estate developers) whose ownership rights entitle them to a share of spoils generated elsewhere.

But, since these individuals never step anywhere near a factory, mine, recording studio or barber shop, their revenue streams or salaries seem not to derive from any material process of production in which employees transform non-labour inputs into outputs (and waste byproducts).

Due to changes in the type of income-yielding assets held by the wealthy, the ultimate source of such property income (in the transfer of a surplus product generated elsewhere) has become less transparent, more opaque.

The royalties, interest, dividends, fees or capital gains enjoyed by such people seem to arise from their own ‘weightless’ risk-bearing, creativity, inventiveness, knowledge or ingenuity – much as rental payments accrued by a resource owner appear to be a yield flowing from the ‘productive contribution’ of land.

Revenues extracted by holders of intellectual property and litigators of IP violations, by owners and traders of financial assets, etc. create niches in which many other people find their means of livelihood and social existence.

Income accruing to these agents involves the redistribution of wealth created elsewhere, in productive activity. The larger the proportion of social wealth absorbed by these unproductive layers, the more plausibly does GDP appear to have become ‘de-coupled’ from its material foundations.
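The arithmetic behind this apparent ‘de-coupling’ can be sketched with hypothetical figures (the numbers, the two-way split of measured GDP, and the function name are illustrative, not drawn from any national accounts):

```python
# Illustrative arithmetic for the 'apparent decoupling' argument above.
# All figures are hypothetical, not drawn from any national accounts.

def energy_intensity(productive_output: float, imputed_income: float,
                     energy_use: float) -> float:
    """Energy per unit of measured GDP, where measured GDP lumps together
    productive value-added and imputed (e.g. financial) income."""
    measured_gdp = productive_output + imputed_income
    return energy_use / measured_gdp

# Year 1: all measured output comes from material production.
e1 = energy_intensity(productive_output=100.0, imputed_income=0.0,
                      energy_use=100.0)

# Year 2: physical production and energy use are unchanged, but imputed
# financial 'output' adds 50 to measured GDP.
e2 = energy_intensity(productive_output=100.0, imputed_income=50.0,
                      energy_use=100.0)

# Measured intensity falls from 1.0 to roughly 0.67, with no change at all
# in the underlying material process.
print(e1, e2)
```

The point of the sketch is that the denominator (measured GDP) can grow through redistribution alone, making the energy/GDP ratio fall even while the physical production process is untouched.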

These individuals are then flattered and enticed by visions describing them as the advance guard of a clean, green future.

Let me first describe these ‘new methods of appropriation of surplus value’ before I explain how they have generated the mirage of ‘sustainable growth’.

To a large degree, what is conventionally described as the ‘knowledge economy’ is better understood as the enlargement and strengthening of intellectual property rights (patents, copyright, trademarks, etc.).

Among other things, this has involved the outsourcing of corporate R&D to universities, and the consequent commercialization of the university’s research function.

This required extending the patent system to the campus, as occurred in the United States with the 1980 passage of the Bayh-Dole Act and the 1982 creation of the Court of Appeals for the Federal Circuit, which hears patent infringement cases.

Fortification of IP in the name of the ‘information economy’ did not bring about any great flowering of scientific research, nor give some new deeper purpose to invention or discoveries. Ideas did not thereby abruptly become ‘drivers of economic growth’, any more than they had been during the times of James Watt, Eli Whitney, Karl Benz or Fritz Haber.

It simply allowed the conferral of proprietary rights to the pecuniary fruits of those inventions (the royalties or licence payments), and the creation of a vast contractual and administrative apparatus for pursuing, assigning, exchanging, litigating and enforcing those ownership rights.

Thus sprang up technology transfer offices, patent lawyers, etc.

This broad patent system also governed rights of use, applying new legal restrictions and bureaucratic encumbrances to research tools and inputs used in collaborative research (bailments, material transfer agreements, software evaluation licences, tangible property licences, etc).

Baroque obstacles of this sort, allowing the IP possessor to threaten denial of access to the invention or discovery, provide the patent holder with the bargaining power needed to appropriate a share of income generated by productive use of the invention.

What has changed, therefore, with the birth of the ‘knowledge economy’ in recent decades, has been the range of things susceptible to patenting (thus becoming a revenue-yielding asset), and the types of entity qualified to hold proprietary rights.

The enforcement of intellectual property rights (in biotechnology and pharmaceuticals, entertainment products, software, agriculture, computer code, algorithms, databases, business practices and industrial design, etc.) was globalized via the WTO’s 1994 TRIPs agreement.

This created ‘winner-take-all’ dynamics of competition in several markets.

The winner of a ‘patent race’ could subsequently protect its market share and its monopoly revenue without needing to innovate or cut costs, because IP rights deterred entry by competitors (if they did not completely exclude them). Through a licence agreement or, even better, an IP securitization deal, the holder of a patent or copyright (e.g. a university patent office) could sit back and idly watch the royalties roll in rather than bothering themselves with the messy, risky and illiquid business of production.

[Image: Yale royalty deal]

Economists have played a privileged role in commercializing university research, and transforming ‘discoveries’ into claims on wealth that entitle their holder (the university technology transfer office) to a portion of the surplus generated elsewhere (as licence fees or patent royalties).

The economist Rick Levin has been a prominent contributor to mainstream economic theory on the patent system. He recently served as co-chair of the US National Research Council’s Committee on Intellectual Property Rights in the Knowledge-Based Economy. In this capacity he has helped prepare a series of reports on the patent system, as part of submissions made for recent amendments of US patent law.

Levin has been president of Yale for the past twenty years, and like Larry Summers at Harvard his job has been to restructure the university so that scholarly research becomes a revenue-generating asset.

Below he can be watched at the Davos World Economic Forum: touting, as if at a trade fair, the wares of Yale’s ‘curiosity-driven research’, including in quantum computing.

Strengthened IP has not been the only ‘new form of appropriation’ to license the popular idea of a ‘dematerialized’ knowledge economy.

The creation during the 1980s of funded pension schemes, the decline in the rate of return on non-financial corporate capital and the removal of cross-border capital controls had increased the liquidity and global integration of capital markets.

From the mid-1990s, increased inflow of funds into stocks and other variable-return securities led to an asset-price boom that (by raising the value of collateral) increased the creditworthiness of borrowers.

In such circumstances, corporate managers could most safely make profits (and earn bonuses) through balance-sheet operations (buying and selling assets and liabilities at favourable prices) rather than engaging in production or commercial activities.

This meant that large, formerly productive transnational enterprises like GE now behaved much like a holding company: issuing debt or equities to fund portfolio investment, cutting interest costs by repaying liabilities, acquiring new subsidiaries and divesting themselves of others, etc.

As ready profits could be made without production or sales, firms became disinclined to pursue revenue in the old-fashioned way: by undertaking expenditure in productive investment, with funds tied up in fixed capital or infrastructure.

With less demand for borrowing to finance expanded operations or new investment, savings flowing into the financial system were not met with a corresponding outflow of funds. This drainage failure increased the volume of funds churning around the financial system (‘savings glut’) in search of speculative returns.

Parallel bubbles thus sprang up without any corresponding increase in investment in tangible capital equipment, machinery, tools or materials.

During the late 1990s ‘New Economy’ boom, valuation of paper claims on wealth (such as the equity prices of dot-com firms listed on the NASDAQ index) reached for the stars, as to a lesser extent did US GDP.

As measured output and labour productivity rose, the rise was attributed to firms’ investment in ‘clean’ information-processing equipment, software, intangible IP assets and ‘human capital’, and to an epochal technological step change: the New Economy.

Such were the circumstances in which the inane idea of a weightless economy, free of all material constraints, acquired enough plausibility (it doesn’t take much) to be used as a journalistic, publishing and academic catchphrase.

These surface developments were based on deep underlying causes, so the trend to financialization has since continued despite periodic interruptions: Clinton-era exuberance was punctured in 2000, and its revival expired in 2007.

Rising inequality and a shift in relative returns has prompted a change in the composition of portfolios and distribution of assets held by the wealthy.

In many advanced economies, the social surplus product (the material embodiment of class power) is less and less manifested in a productive stock of capital goods (buildings, equipment, machinery, tools).

Rising net worth, as measured by holdings of paper assets and accounts in electronic databases, eventually yields dividends, interest or capital gains. These may be recycled by employing an unproductive retinue of lawyers, consultants, managers, advertisers, security guards, etc.

Increasingly the surplus product is absorbed in such a manner, or embodied in luxury consumption goods and other types of unproductive expenditure (e.g. armaments).

But, for the most part, the assets of the property-owning classes circulate as excess savings through the financial system, generating market liquidity and bidding up prices.

Thus, during the most recent decade (and especially following the outbreak of economic crisis in 2007), the prices of financial assets and other private claims on wealth have again appreciated while growth in employment, fixed investment and real productive capacity has stagnated.

The proportion of economic activity generated (according to national accounts) by ‘financial and business services’ and related ‘clean’ industries has accordingly risen. The share of value-added produced by manufacturing and other ‘dirty’ sectors has fallen.

In Australia, so-called financial and insurance services now account for the largest share of measured output (10%) of any industry. During the decade to 2011, financial services grew faster than any other industry (professional services also grew swiftly during this period).

All this has meant that the propertied classes could now receive several varieties of property income (interest, dividends, royalties, rent, salaries, etc.) at a distance safely remote from any production process in which employees turned non-labour inputs into outputs.

To some extent, of course, this had been true for a century, ever since the separation of ownership and control. The birth of the modern corporation had brought the retirement of the ‘entrepreneur’ to a quiet life of coupon clipping, with management and supervision left to a class of paid functionaries.

But with the late twentieth-century growth of funded pension schemes, institutional investors and internationalized capital markets, ownership was dispersed (and capital ‘depersonalized’) to a far greater extent than ever before. Foreign residents could now hold shares almost anywhere, and firms could list their stock on several major exchanges at once, thus raising capital abroad on the deepest markets.

Even a single asset, not to speak of an entire portfolio, now often bundled together several income-yielding sources, the final origin (and riskiness) of which remained opaque to its owner.

The ultimate source of profit (and rent, interest, royalties, capital gains, etc.) in material production became less transparent still.

As well as sowing the illusion of ‘de-coupled growth’, these structural changes have posed practical problems for statisticians and economists who compile the national accounts and estimate the size of aggregate output or value-added.

Foley has noted elsewhere how the ‘output’ of banking, management and administration, insurance, real estate, financial, business and professional services (law, advertising, consulting, etc.) can’t be measured independently.

Instead, in the national accounts, the ‘output’ of these industries is simply imputed from the income paid to its members (e.g. the value of ‘financial intermediation services’ is indirectly measured by the margin between interest charged on loans and interest paid on deposits).

Hence a salary bonus paid to a bank executive is included in the net output of that industry, whereas a similar payment to, say, a manufacturing executive does not increase the measured value-added of manufacturing.
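The imputation can be put as a toy calculation. This is a deliberately simplified sketch of the interest-margin convention: the figures and the function are illustrative, and actual national-accounts rules for ‘financial intermediation services indirectly measured’ involve reference rates and further adjustments.

```python
# Toy sketch of imputing bank 'output' from the interest margin.
# Figures are hypothetical; real national-accounts conventions (FISIM)
# are more elaborate than this subtraction.

def imputed_bank_output(loans: float, loan_rate: float,
                        deposits: float, deposit_rate: float) -> float:
    """'Output' = interest charged on loans minus interest paid on deposits."""
    return loans * loan_rate - deposits * deposit_rate

# A bank holding 1000 in loans at 7% against 1000 in deposits at 3% is
# credited with a margin of about 40 as measured 'output', though no
# independently observable service has been counted.
print(imputed_bank_output(1000.0, 0.07, 1000.0, 0.03))
```

Note that nothing in the calculation refers to any service actually rendered: the ‘output’ is read off from the income side, which is exactly the circularity described above.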

This lack of an independent measure of output suggests that the contribution of these industries to aggregate output is illusory.

They should be understood as unproductive: employees do not produce any marketable good or service (adding value by transforming inputs) that is consumed by other producers or serves as an input to production.

Their wages and salaries, therefore, are a deduction from the social surplus product (value-added minus the earnings of productive employees).

During the past century, most advanced capitalist countries have exhibited a secular rise in the proportion of employees in such occupations, devoted to the protection or exchange of property rights and the enforcement of contracts (rather than the production of goods and services).

This trend has accelerated over the past forty years, as accumulation of fixed capital has slowed because productive investment has become unprofitable.

In such circumstances, the surplus product must be absorbed (and aggregate demand maintained) by employing large armies of lawyers, managers, consultants, advertisers, etc. (as described above, this is accompanied by a binge of elite consumption spending on luxury yachts, hotels and private planes, and by armament production).

As with the incomes of the propertied classes themselves, the larger the proportion of social wealth absorbed by these unproductive, upper-salaried layers, the more will aggregate output be overestimated, and the more plausibly will GDP appear to have become ‘de-linked’ from its material foundations.

Moreover, the collective identity of the new middle classes is based on a self-regarding view of their own ‘sophisticated’ consumption habits, compared to those of the bas fonds. And prevailing ideology explains an individual’s high earnings by his or her possession of ‘human capital.’

Members of this upper-salaried layer need little convincing, therefore, to see themselves as the personification of a clean green knowledge economy.

It is thanks to these circumstances, taken together, that we now hear clamant and fevered talk about a ‘green economy’ and ‘renewable’ capitalism with growth ‘decoupled from resource use and pollution’. Here is described a ‘win-win situation’: a confluence of all objectives in which ‘tackling climate change’ creates ‘prosperity’ and ‘the best economies’.

‘Green growth’ is thus a fantastic mirage generated by asset bubbles, social inequality, rent extraction, and the growing power of the financial oligarchy. An apparent cornucopia appears as a free gift of nature and human ingenuity.

Yet paper (or electronic) claims to wealth merely entitle their bearer to a portion of the social surplus.

The material existence of that surplus, as with any future stream of consumption goods or services, still depends on a resource-using process of production that employs physical inputs (and generates waste). Service workers must inescapably eat, clothe themselves and travel from residence to workplace.

Thus, in reality, capitalism does face geophysical limits to growth and is temporally bounded.

With its systematic demand for constantly growing net output, capital accumulation and rising labour productivity, it brings increasingly automated methods of production (i.e. labour-saving capital-using technical change). This implies ever-greater energy intensity (more energy per unit employment) or higher energy productivity (through better quality energy).
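The relation in the last sentence is an identity (the notation here is mine, not the author’s): energy per employee equals output per employee multiplied by energy per unit of output. So rising labour productivity, with the energy intensity of output unchanged, must raise energy use per worker:

```python
# Identity sketch (notation mine, figures hypothetical):
#   energy / worker = (output / worker) * (energy / output)
# Labour-saving technical change raises output per worker; unless energy
# productivity (output per unit of energy) rises at least as fast,
# energy per worker must rise too.

def energy_per_worker(labour_productivity: float,
                      energy_per_output: float) -> float:
    return labour_productivity * energy_per_output

base = energy_per_worker(labour_productivity=50.0, energy_per_output=2.0)
automated = energy_per_worker(labour_productivity=80.0, energy_per_output=2.0)

# More output per worker at constant energy intensity of output means
# proportionally more energy per worker.
print(base, automated)
```

The identity makes the text’s either/or explicit: automation can only avoid higher energy use per employee if energy productivity improves in step.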

[Figure: Energy intensity and labour productivity]

[Figure: Australia, total primary energy supply]

Industrial capitalism thus requires a ‘promethean’ energy technology (one that produces more usable fuel than it uses). It depends also on the inflow of low-entropy fuels and the dissipation of waste to the peripheral regions of the world economy.

No element of the existing social order escapes this dependence, no matter how ethereal. Even the liquidity of US Treasury securities, which underpins the liquidity of world capital markets, is sustained by Washington’s military dominance of the Persian Gulf, other oil-rich regions and commercial sealanes.

There is no prospect of energy-saving technical change on the horizon. I’ve discussed before how so-called renewable energy sources present no alternative. Renewables are parasitic on the ‘material scaffold’ of fossil fuel inputs, since they are (compared to oil and coal) poor quality fuels with relatively low net energy yields.

That is why Nicholas Georgescu-Roegen declared that faith in so-called renewables evinced a hope of ‘bootlegging entropy’, linked to the fantasy of endless growth. A renewable, he said, is ‘a parasite of the current technology. And, like any parasite of the current technology, it could not survive its host.’

For the past two hundred years, fossil fuels and other material inputs have allowed industrial capitalism to escape the Malthusian trap and experience (localized) exponential growth. This has come at the ecological price of disrupting the carbon cycle, which has inflicted immense damage and now threatens catastrophe.

In these terrifying circumstances, the successful packaging of ‘green growth’ for zesty ideological consumption reveals the existence of deep political despair, widespread confusion and reality avoidance.

Above all, Pollyannaism is rooted in complacent assumptions about another kind of ‘sustainability’: the permanent survival of the fundamental institutions of capitalism (privately owned capital goods, wage labour and production for profit), or the absence of feasible alternatives.

Growing up

January 15, 2013

Anyone with straws left out in the ideological wind will have noticed a recent uplifting gust of ‘green growth’ emanating from the policy-making elite (government agencies, supranational organizations, think tanks and NGOs) and its activist offshoots.

The spread of this mantra, now a commonplace of mind-numbing press releases, lofty communiqués and pious editorials, has been extraordinarily rapid.

In June 2009 the OECD council of ministers adopted a ‘Green Growth Strategy.’ This was described as satisfying two objectives, pari passu: ‘Growing one’s economy AND protecting the environment.’

‘Ministers’, said the OECD secretary-general, ‘acknowledged that green and growth can go hand in hand.’

In 2009 the South Korean government of Lee Myung-bak set up a Presidential Committee on Green Growth. As part of emergency efforts to sustain aggregate demand amid the global crisis, the ROK’s so-called Green New Deal involved $41 billion of construction spending.

Nearly half this expenditure went towards the Four Rivers Project of dam and weir building, dredging, river diversion, and the construction of ‘riverside villages’, lakes, islands and athletics fields for tourism. (Fiscal expansion was combined with a reduction of the corporate tax rate and privatization of five state enterprises.)

This demonstration of the president’s eco-smarts met with a worldwide downpour of garlands and applause. Sensing a useful vehicle to pursue Seoul’s diplomatic influence and the interests of local firms, Lee (a former Hyundai Engineering and Construction executive) established a regional ministerial network on green growth (the Seoul Initiative).

Then a Global Green Growth Institute was established, with Nicholas Stern and Thomas Heller given the titles and salaries of co-vice-chairs (Jeffrey Sachs sits on the board).

In 2011 the governments of Denmark and Mexico joined the ROK in creating the Global Green Growth Forum.

The latter entity, according to website effusions, will ‘propel a new wave of economic growth, paralleling the IT-revolution, and secure long-term economic growth, including development in emerging economies and developing countries while mitigating the risks of climate change.’

Meanwhile the Asian Development Bank released a 2011 report on ‘greening growth’ in Asia and the Pacific. The Australian Greens hope to ‘unleash knowledge-based industries and intelligent green growth.’ McKinsey consultants promise to ‘help clients find green growth solutions that balance value-creation and sustainability.’

Such is the humdrum linguistic Muzak of greenwash: no satire of its vapidity could improve on it.

Last year’s Rio+20 UN conference on sustainable development named a ‘green economy’ as one of the summit’s two ‘themes’. A guidebook to the Green Economy was prepared, noting how ‘in 2008, the term was revived in the context of discussions on the policy response to multiple global crises… the context of the financial crisis and concerns of a global recession.’

In 2012 a World Bank report was advertised as a ‘framework for countries to begin greening their growth.’


At the 2012 Business 20 (‘the B20’) conclave in Los Cabos, a Green Growth Action Alliance was ‘launched’.

Its founding members included ‘over 50 of the world’s largest energy companies, international financial institutions and development finance banks working to deliver greater investments into clean energy, transportation, agriculture and other green investments.’ This ‘new club of international financial institutions, development banks, companies, and private investors’ included Merrill Lynch, Deutsche Bank, Barclays, Morgan Stanley, HSBC, Zurich Insurance Group, Samsung, Nestlé, PepsiCo, Wal-Mart, Siemens, GDF Suez, Alcatel-Lucent, Eskom, McKinsey and Accenture.

This alliance, administered by the World Economic Forum’s Green Growth Partnerships Initiative, aimed to stimulate ‘green investment’ in ‘developing markets’, by promoting trade liberalization for ‘green goods and services’ while ‘dramatically increasing efforts to target public funding to leverage private investments’.

In other words, pools of private savings, along with ‘overseas development assistance’, would be mobilized for ‘innovative financing’ (i.e. public-private partnerships) in the energy, agriculture, water and transport industries of poor countries, such as the DESERTEC project in North Africa.

‘Engagement of the host-country government’ would be required to ‘develop tailored mechanism to de-risk investments.’

A PricewaterhouseCoopers employee described with bracing candour the ‘role of government’ in this ‘win-win opportunity’ of green growth in ‘developing markets’:

[The] ability to extract some money out of government can often make the difference for the private sector in terms of an investment that might be marginal and one which is profitable for them. So leveraging public finance in this way will be very important to companies.

Thus, in one of its guises, ‘green growth’ is simply a lever by which investors seek to gain access on favourable terms to profitable new outlets for their funds.

If this were all it was, it wouldn’t be especially noteworthy or unusual.

Private infrastructure investments (as well as those in transport, resource extraction, etc.) typically have a long time horizon, tying up vast amounts of illiquid capital for decades during which yields arrive slowly and may be comparatively low.

If such projects are to proceed at all, private firms embarking on them must be insured (usually by government guarantees, subsidies and tax breaks) against the risk that their future cash flows will become insecure (through obsolescence or expropriation), be insufficient to repay loans, or not prove profitable enough to be worth their while.

Such government backstops generally arrive if private investments can be framed or packaged as meeting some kind of social imperative.

And, of course, firms owned or domiciled in advanced capitalist countries regularly ask for, and receive, diplomatic, political or military help from their own imperial state when trying to penetrate new regions and markets and when dealing with potentially unhelpful foreign governments.

But there’s more than that to the embrace by the OECD, World Bank, WEF, etc. of ‘greening the economy’.

To see why, note how the WEF described climate change as a ‘diplomatic opportunity’:

An international narrative of economic growth and a low-carbon future collectively presented by the governments of the major economies during 2009, in a leadership collaboration with international business, civil society and climate experts, would offer a positive, unifying and long-term multilateral agenda for both the economy and the climate, as well as a positive message for consumers and voters.

This affirmative and growth-based agenda would help the global public see how the long-term economic interests of major economies such as the USA, China, India, the EU and Japan are served by coming together around a shared set of objectives to drive forward a low-carbon global economy.

‘Green growth’ has thus arisen to perform an important ideological task for political rulers, ‘international business’ and ‘civil society’.

It provides an empyreal vision, assuring a worried and potentially restless ‘global public’ that all necessary changes are in train, that the remedy to ills caused by capitalist accumulation is found in capitalism itself, and thus that preserving the environment and capitalist social relationships are compatible and indeed complementary goals.

The conditions under which ‘green growth’ (and its forebear ‘sustainable development’) can fulfil this ideological role are worthy of examination, for these conditions are social rather than technical.

How does such thin, empty fare seem plausible or muster any persuasive force?

Amplification by the media, scholars and think tanks, and relentless ‘seeding’ by powerful actors in other communication channels, is one thing.

The boundless credulity and motivated ignorance of ‘activist’ groups is another. Most eco-activist groups cannot be sharply distinguished – either in policy goals, funding or personnel – from ‘green industry’ lobby groups whose members seek rents (subsidies, procurement contracts, etc.) from government support for the ‘green economy’.

International NGOs like Greenpeace and the WWF gratefully attend exclusive conclaves such as Rio+20, the Davos Forum and OECD summits. At these convivial gatherings they issue agendas for a green economy and green growth, which are as glibly insubstantial and as respectful of existing economic institutions as the versions released by their corporate co-attendees.

On Australia’s domestic scene, the Climate Institute and Australian Conservation Foundation, along with the ACTU, ACOSS and the Australia Green Infrastructure Council, lobbied the Rudd government for a Green New Deal. The Wilderness Society, linked to the Greens rather than the ALP, also follows the fortunes of a ‘clean green new economy.’

Ben McNeil - Clean Industrial Revolution

Ben McNeil - Green growth

But the ideological appeal of ‘green growth’, and its ability to mobilize a constituency in its support, is more solidly rooted than mere mystification or self-interest can explain.

It is made possible by real social developments and economic changes that have occurred over the past few decades. For some (mostly property owners), an apparently ‘weightless’ new economy provides a stream of income (whether dividends, interest, rent, royalties, capital gains or salary) that seems no longer to depend on any resource-using material process of production.

Thus has been created a mirage of ‘decoupling’ between aggregate output and energy or material use. Unreliant on physical inputs, surely (it is said) such an economy (‘dematerialized’, ‘decarbonized’, etc.) escapes geophysical and other constraints, and is free to grow without limit?

I’ll explain this in the next post.

Yelling at a machine

January 13, 2013

An ambitious Democrat prosecutor from Massachusetts is currently the target of personal criticism for ruthlessly destroying someone’s life to further her own political aspirations (and to enforce property rights).

This description refers to Carmen M. Ortiz, a US attorney for the Justice Department, who handled the indictment of Aaron Swartz for allegedly accessing vast numbers of academic papers from JSTOR without authorization.

But, except for the case particulars about the Internet and IP, the description might also apply to Martha Coakley (Massachusetts attorney general and failed candidate for the US Senate), Thomas Reilly (her predecessor, later beaten by Deval Patrick for his party’s nomination as gubernatorial candidate) and Scott Harshbarger (Reilly’s predecessor as state attorney general, losing gubernatorial candidate and ex-president and CEO of Common Cause).

Middlesex DAs - Martha Coakley, Tom Reilly, Scott Harshbarger

The latter three vaulted to prominence and sought higher office by railroading a family of Middlesex county day-care centre providers, in an infamous case alleging ritual child abuse, based on fantastic testimony elicited from children. (Such episodes of hysteria were common during the 1980s and early 1990s, when the mix of prurience, career opportunity and right-thinking sexual politics proved irresistible to some ‘progressive’ journalists, social workers, lawyers and psychologists.)

Harshbarger and Reilly conducted the original prosecution of the Amiraults and Coakley tried to prevent Gerald Amirault’s release.

In 1991 Coakley had been appointed head of the Middlesex DA office’s Child Abuse Protection Unit; in 2002 she established an Adult Sexual Assault Division and noisily prosecuted a priest.

Coakley later argued in support of another criminal conviction overturned by the US Supreme Court for Sixth Amendment violations, and delayed release of a wrongfully convicted man in a case later made into a Hollywood drama.

Ortiz thus has several forebears in the role of grubbily ambitious Massachusetts Democrat prosecutor. The habitual lack of probity displayed by such people follows, quite straightforwardly, from their professional incentives.

Aaron Swartz’s blog post about fundamental attribution error seems apposite:

[When] the system isn’t working, it doesn’t make sense to just yell at the people in it — any more than you’d try to fix a machine by yelling at the gears… When there’s a problem, you shouldn’t get angry with the gears — you should fix the machine.

Of course, a society isn’t a machine, and the role of lawyers in it isn’t subject to tinkering (by whom?), corrective repair or gradual amendment.

In the contemporary United States, the social privileges enjoyed by elite members of the legal profession follow, in part, from an institutional evolution that took place long ago, transforming property rights, technology and the state.

The foundation of modern US tort law was bound up with changes to ownership rights, the development of mechanized industry and the status of juries and the bar. This transition was described by Morton Horwitz in his classic analyses of US law between the War of Independence and the Civil War.


As Horwitz described it, this period involved the ‘overthrow of eighteenth century pre-commercial and anti-developmental common law values’:

As political and economic power shifted to merchant and entrepreneurial groups in the post-revolutionary period, they began to forge an alliance with the legal profession to advance their own interests through a transformation of the legal system.

Decisive changes occurred over the question of water rights with the development of textile, paper and saw mills in New England, New York and Pennsylvania (the first being Samuel Slater’s water-powered mill in Pawtucket).

‘Under the Mill Acts, an owner of a mill situated on any non-navigable stream was permitted to raise a dam and permanently flood the land of all his neighbors, without seeking prior permission’:

[The] law of negligence became a leading means by which the dynamic and growing forces in American society were able to challenge and eventually overwhelm the weak and relatively powerless segments of the American economy. After 1840 the principle that one could not be held liable for socially useful activity exercised with due care became a commonplace of American law. In the process, the conception of property gradually changed from the eighteenth century view that dominion over land above all else conferred the power to prevent others from interfering with one’s quiet enjoyment of property to the nineteenth century assumption that the essential attribute of property ownership was the power to develop one’s property regardless of the injurious consequences to others…

Anticipating a widespread movement away from property theories of natural use and priority, they introduced into American common law the entirely novel view that an explicit consideration of the relative efficiencies of conflicting property uses should be the paramount test of what constitutes legally justifiable injury. As a consequence, private economic loss and judicially determined legal injury, which for centuries had been more or less congruent, began to diverge.

Slater's water-powered textile mill 1793

Water-powered mills, by compelling changes in the rights and obligations of property owners, also implied changes in the scope and nature of liability incurred by failure to uphold duties:

At the beginning of the nineteenth century there was a general private law presumption in favour of compensation, expressed by the oft-cited common law maxim sic utere. For Blackstone, it was clear that even an otherwise lawful use of one’s property that caused injury to the land of another would establish liability in nuisance, “for it is incumbent on him to find some other place to do that act, where it will be less offensive.”

In 1800, therefore, virtually all injuries were still conceived as nuisances, thereby invoking a standard of strict liability which tended to ignore the specific character of the defendant’s act. By the time of the Civil War, however, many types of injury had been reclassified under a “negligence” heading, which had the effect of substantially reducing entrepreneurial liability. Thus the rise of the negligence principle in America overthrew basic eighteenth century private law categories and led to a radical transformation not only in the theory of legal liability but in the underlying conception of property on which it was based.

Meanwhile the social position of lawyers and judges was elevated:

One of the most important consequences of the increased instrumentalism of American law was the dramatic shift in the relationship between judge and jury that began to emerge at the end of the eighteenth century. Although colonial judges had developed various techniques for preventing juries from returning verdicts contrary to law, there remained a strong conviction that juries were the ultimate judge of both law and facts. And since the problem of maintaining legal certainty before the Revolution was largely identified with preventing political arbitrariness, juries were rarely charged with contributing to the unpredictability or uncertainty of the legal system. But as the question of certainty began to be conceived of in more instrumental terms, the issue of control of juries took on a new significance. To allow juries to interpret questions of law, one judge declared in 1792, “would vest the interpretation and declaring of laws, in bodies so construed, without permanences, or previous means of information, and thus render laws, which ought to be an uniform rule of conduct, uncertain, fluctuating with every change of passion and opinion of jurors, and impossible to be known till pronounced.” Where eighteenth century judges often submitted a case to the jury without any directions or with contrary instructions from several judges trying the case, nineteenth century courts became preoccupied with submitting clear directions to juries…

Juries were sidelined as certified legal professionals arrogated to themselves the exclusive right to decide on questions of law:

One of the phenomena that has most puzzled historians is the extraordinary change in the position of the postrevolutionary American Bar… In the period between 1790 and 1820 we see the development of an important set of relationships that made possible this position of [political and social] domination: the forging of an alliance between legal and commercial interests…

The leaders of the Bar in the period after 1790 are not the land conveyancers or debt collectors of the earlier period, but for the first time, the commercial lawyers…

[One] of the leading measures of the growing alliance between bench and bar on the one hand and commercial interests on the other is the swiftness with which the power of the jury is curtailed after 1790.

Three parallel procedural devices were used to restrict the scope of juries. First, during the last years of the eighteenth century American lawyers vastly expanded the “special case” or “case reserved”, a device designed to submit points of law to the judges while avoiding the effective intervention of a jury…

A second crucial procedural change – the award of a new trial for verdicts “contrary to the weight of the evidence” – triumphed with spectacular rapidity in some American courts at the turn of the century. The award of new trials for any reason had been regarded with profound suspicion by the revolutionary generation… Yet, not only had the new trial become a standard weapon in the judicial arsenal by the first decade of the nineteenth century; it was also expanded to allow reversal of jury verdicts contrary to the weight of the evidence, despite the protest that “not one instance… is to be met with” where courts had previously reevaluated a jury’s assessment of conflicting testimony…

These two important restrictions on the power of juries were part of a third more fundamental procedural change that began to be asserted at the turn of the century. The view that even in civil cases “the jury [are] the proper judges not only of the facts but of the law that [is] necessarily involved” was widely held even by conservative jurists at the end of the eighteenth century…

During the first half of the nineteenth century, however, the Bar rapidly promoted the view that there existed a sharp distinction between law and fact and a correspondingly clear separation of function between judge and jury. For example, until 1807 the practice of Connecticut judges was simply to submit both law and facts to the jury, without expressing any opinion or giving them any direction on how to find their verdict. In that year, the Supreme Court of Errors enacted a rule requiring the presiding trial judge, in charging a jury, to give his opinion on every point of law involved. This institutional change ripened quickly into an elaborate procedural system for control of juries…

The subjugation of juries was necessary not only to control particular verdicts but also to develop a uniform and predictable body of judge-made commercial rules.

Not until the nineteenth century did judges regularly set aside jury verdicts as contrary to law. At the same time, courts began to treat certain questions as “matters of law” for the first time. …

By 1812… in a decision that expressed the attitude of nineteenth century judges on the question of damages, Justice Story refused to allow a damage judgement on the ground that the jury took account of speculative factors that “would be in the highest degree unfavourable to the interests of the community” because commercial plans “would be involved in utter uncertainty.” As part of this tendency, judges began to take the question of damages entirely away from juries in eminent domain proceedings… Finally, as part of the expanding notion of what constituted a “question of law” courts for the first time ordered new trials on the ground that a jury verdict was contrary to the weight of the evidence, despite the protest that “not one instance… is to be met with” where courts had previously reevaluated a jury’s assessment of conflicting testimony.

(In our present day, such anti-democratic contempt for popular judgements is embodied in someone like Cass Sunstein.) This was a betrayal of the revolutionary legacy and its animating Enlightenment principles:

By 1820 the legal landscape in America bore only the faintest resemblance to what existed forty years earlier. While the words were often the same, the structure of thought had dramatically changed and with it the theory of law. Law was no longer conceived of as an eternal set of principles expressed in custom and derived from natural law. Nor was it regarded primarily as a body of rules designed to achieve justice only in the individual case. Instead, judges came to think of the common law as equally responsible with legislation for governing society and promoting socially desirable conduct. The emphasis on law as an instrument of policy encouraged innovation and allowed judges to formulate legal doctrine with the self-conscious goal of bringing about social change…

Thus, the intellectual foundation was laid for an alliance between common lawyers and commercial interests. And when in 1826 Chancellor Kent wrote to Peter DuPonceau about the arrangement of his forthcoming Commentaries, he underlined the extent to which he would pay attention only to decisions of the courts of commercial states…

As the Bar was molding legal doctrine to accommodate commercial interests… the mercantile interest for the first time was required to recognize the legal primacy of the Bar.

The historical lesson that technical innovations (e.g. development of the water-powered mill) sometimes bring changes in property rights (and thus alter the role of lawyers) has obvious contemporary relevance.

In 1996 the economist Kenneth Arrow discussed how technical features of information as a commodity had brought about innovations in property law (IP) to preserve the exclusive rights of owners.

He nonetheless suggested that technical innovation called into doubt the very future of an economy (capitalism) built on private ownership of capital goods, the employment of propertyless workers, and the interaction through decentralized market exchange of discrete production units (firms):

Once obtained, it [information] can be used by others, even though the original owner still possesses it. It is this fact which makes it difficult to make information into property. It is usually much cheaper (not, however, free) to reproduce information than to produce it… Two social innovations, patents and copyrights, are designed to create artificial scarcities where none exists naturally…

The ability of information to move cheaply among individuals and firms has analogues with one class of property, called fugitive resources. Flowing water and underground liquid resources (oil or water) cannot easily be made into property. How does one identify ownership, short of labelling each molecule? … It is for this reason that water has always been recognized as creating a special property problem and has been governed by special laws and judicial decisions…

Let me conclude with some conjectures about the future of industrial structure. Information overlaps from one firm to another, yet the firm has so far seemed sharply defined in terms of legal ownership. I would forecast an increasing tension between legal relations and fundamental economic determinants. Information is the basis of production, production is carried out in discrete legal entities, yet information is a fugitive resource, with limited property rights.

Small symptoms of these tensions are already appearing in the legal and economic spheres. There is continual difficulty in defining intellectual property; the US courts and Congress have come up with some strange definitions. Copyright law has been extended to software, although the analogy with books is hardly compelling. There are emerging obstacles with mobility of technical personnel; employers are trying to put obstacles in the way of future employment which would in any way use skills and knowledge acquired in their employ.

These are still minor matters, but I would surmise that we are just beginning to face the contradictions between the system of private property and of information acquisition and dissemination.