Archive for February, 2010

Unconventional weapons

February 27, 2010

‘Diggers fighting in Afghanistan lose frozen sperm fight’… just what are they doing over there?!

The Herald Sun took a stand for “our Diggers” this week, resulting in some amusing lines. “Our boys are heading off fighting for freedom and their country… You’d think Defence could pay a few dollars to freeze some sperm for them,” said one Digger’s mama in the article above.

The editorial, ‘Don’t Desert the Diggers’, blustered that “the policy is more akin to attitudes in the early 20th century that have survived to trample the technological and medical advances of the 21st century.” If someone can explain to me what that means, I would be grateful.


How thick is your cortex?

February 25, 2010

Support for the parieto-frontal integration theory of general intelligence in this open-access PNAS paper, Distributed neural system for general intelligence revealed by lesion mapping:

General intelligence (g) captures the performance variance shared across cognitive tasks and correlates with real-world success. Yet it remains debated whether g reflects the combined performance of brain systems involved in these tasks or draws on specialized systems mediating their interactions. Here we investigated the neural substrates of g in 241 patients with focal brain damage using voxel-based lesion–symptom mapping. A hierarchical factor analysis across multiple cognitive tasks was used to derive a robust measure of g. Statistically significant associations were found between g and damage to a remarkably circumscribed albeit distributed network in frontal and parietal cortex, critically including white matter association tracts and frontopolar cortex. We suggest that general intelligence draws on connections between regions that integrate verbal, visuospatial, working memory, and executive processes.

General intelligence (Spearman’s g) is just about the most controversial topic out there, from Gould’s Mismeasure of Man and The Bell Curve to more recent contributions (for and against). Positive correlations between test results across a range of cognitive tasks suggest that something links verbal skills, spatial reasoning and memory (but not face recognition). So the debate is more or less about the status of factor analysis.
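To make the ‘positive manifold’ concrete, here is a toy simulation of my own (not the paper’s lesion-mapping method, and with invented factor loadings): each task score is driven partly by a single latent variable, every pairwise correlation comes out positive, and the first principal component of the correlation matrix plays the role of g.

```python
# Toy illustration only -- not the paper's analysis.
# Simulate test scores that all load on one latent factor "g", then check
# that (a) every pairwise correlation is positive (the positive manifold)
# and (b) the first principal component soaks up most of the shared variance.
import numpy as np

rng = np.random.default_rng(0)
n_people = 5000
tasks = ["verbal", "spatial", "memory", "executive"]
loadings = np.array([0.8, 0.7, 0.6, 0.7])         # invented loadings on g

g = rng.standard_normal(n_people)                  # latent general factor
noise = rng.standard_normal((n_people, len(tasks)))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

R = np.corrcoef(scores, rowvar=False)              # task-by-task correlations
eigvals = np.linalg.eigvalsh(R)                    # eigenvalues, ascending
share = eigvals[-1] / eigvals.sum()                # variance share of 1st PC

print(np.round(R, 2))
print(f"first component explains {share:.0%} of the variance")
```

Whether that first component is a causally meaningful quantity or just an artefact of summing correlated abilities is exactly what is in dispute.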

Direct democracy and budget decisions

February 23, 2010

Paul Cockshott and Karen Renaud have a new paper on electronic direct democracy, “Extending Handivote to Handle Digital Economic Decisions”. It broadens their previous work on standard “yes/no” plebiscites to include voting systems where responses can assume a range of values, multiple issues are decided upon, and there are functional dependencies between items (where, say, the chosen tax level constrains the range of possible expenditures). Among other things, they demonstrate a procedure for finding the optimal feasible mix of expenditure compatible with a balanced budget.  
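Their procedure is more elaborate than this, but a minimal sketch of the underlying problem (my own illustration, with made-up figures, not Cockshott and Renaud’s algorithm) is: take the spending level voters choose for each item and rescale the mix so that it exactly exhausts a separately voted level of revenue.

```python
# Illustrative sketch only -- not Cockshott and Renaud's procedure.
# Reconcile per-item voted spending with a balanced-budget constraint by
# rescaling the voted mix so that it sums exactly to the available revenue.
import numpy as np

def feasible_budget(preferred, revenue):
    """Rescale the voted spending mix proportionally so it sums to revenue."""
    preferred = np.asarray(preferred, dtype=float)
    return preferred * (revenue / preferred.sum())

# hypothetical medians of voted spending levels, in $bn
voted = {"health": 60.0, "education": 45.0, "defence": 25.0, "transport": 20.0}
revenue = 130.0     # assumed tax take, itself fixed by a prior vote

allocation = feasible_budget(list(voted.values()), revenue)
for item, amount in zip(voted, allocation):
    print(f"{item:10s} {amount:6.1f}  ({amount / revenue:.0%} of the budget)")
```

The functional dependency (the chosen tax level constraining total expenditure) shows up here only as the single revenue figure.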

There is now no technical reason for the broad spending priorities of governments (x% on health, y% on education) to be decided by a professional political class rather than the populace as a whole. If you like the idea of democracy, electoral representation is, at best, a lossy compression of voters’ policy preferences. Multi-candidate ballots are low-bandwidth channels (on the face of it, a vote for Obama over McCain conveys log2(2) = 1 bit of information), while the opinions of citizens over all political issues contain a huge amount of information. Elections work by exploiting redundancy in voters’ preferences. As everyone knows, political beliefs on different issues aren’t independent; they are often highly correlated, as with, for example, stances on abortion rights and attitudes towards global warming. The existence of mutual information between preferences (if someone wants x, there’s an increased likelihood that they also want y) allows the political signal to be compressed into party affiliation or support for some candidate. (The distribution of voters’ favourite “ideal points” in a multidimensional policy space thus maps roughly to a one-dimensional ideological spectrum, which forms the basis for the well-known median-voter theorem.)
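As a back-of-envelope illustration of that compression (my numbers, purely invented): two binary issues whose preferences are correlated take fewer bits to describe than two independent ones, and the saving is exactly their mutual information.

```python
# Entropy accounting for two correlated binary preferences (invented figures).
import numpy as np

def H(p):
    """Shannon entropy, in bits, of a probability table p."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# joint distribution over (issue A, issue B): voters mostly say yes/yes or no/no
joint = np.array([[0.40, 0.10],      # A = yes: B = yes, B = no
                  [0.10, 0.40]])     # A = no : B = yes, B = no
pA, pB = joint.sum(axis=1), joint.sum(axis=0)

print(f"H(A) + H(B) = {H(pA) + H(pB):.2f} bits if the issues were independent")
print(f"H(A, B)     = {H(joint):.2f} bits actually needed per voter")
print(f"I(A; B)     = {H(pA) + H(pB) - H(joint):.2f} bits saved by the correlation")
```

Scale that up across dozens of correlated issues and most of a voter’s profile can be predicted from a coarse label like party affiliation, which is roughly what a ballot transmits.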

This not-terrible system for transmitting popular wishes might have satisficed in the 19th century. But there’s just no point to it now, other than to protect the social privileges of certain groups.

She’ll be right

February 18, 2010

It’s well-known that people have trouble measuring risk.

A study from 1978, ‘Judged Frequency of Lethal Events’, showed that, when estimating the relative probability of various dangerous events (homicide or suicide, accident or disease), subjects used the availability heuristic. The more readily an event could be brought to mind – due to vividness or media coverage – the more likely it was considered to be.

So the incidence of spectacular disasters (plane crashes, terrorist attacks) is overestimated, while less dramatic but more common causes of death are underestimated.
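A crude way to put the mechanism (all figures below are invented for illustration): if judged frequency tracks ease of recall, and recall tracks media coverage rather than incidence, the dramatic-but-rare ends up ranked above the mundane-but-common.

```python
# Toy model of the availability heuristic; every number here is made up.
causes = {
    # cause:           (actual annual deaths, news stories per year)
    "heart disease":   (600_000,    500),
    "diabetes":        (80_000,     200),
    "plane crashes":   (400,     20_000),
    "terrorism":       (100,     50_000),
}

by_actual = sorted(causes, key=lambda c: causes[c][0], reverse=True)
by_recall = sorted(causes, key=lambda c: causes[c][1], reverse=True)

print("ranked by actual deaths: ", by_actual)
print("ranked by ease of recall:", by_recall)
```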

For some reason, the former (overestimation) variety of error gets most of the attention.

Over the past few years people like Frank Furedi have started banging on about a ‘culture of fear’, of successive panics aroused over global warming, BSE, the Y2K bug, terrorism, SARS, avian flu, etc. Furedi, it’s true, is a crank, but more worthwhile studies have also shown how hysterical media reporting induces availability bias by exaggerating new health threats.

Meanwhile Gerd Gigerenzer famously showed that excess US road accident fatalities (353) in the last 3 months of 2001, probably due to people avoiding air travel, outnumbered the 266 passengers and crew killed while flying on 11 September. Despite the hyping of terrorism’s threat by governments and media, the lifetime chance of death from a terrorist attack is roughly equal to the risk of dying from another low-probability, high-consequence event like an asteroid strike.

Clearly, given the tendency of risk perception to outrun objective probability, reality checks are necessary.

Unfortunately, somewhere along the line contrarian warnings against ‘alarmism’ have acquired a kind of prestige, and it’s become a marker of high status to suggest that nothing bad can ever happen. H1N1?  Scaremongering by the media and WHO, both in the pay of big pharma. Global warming? Overblown.

Somebody who wrote a book called The Existential Jesus can, with a straight face, intone against ‘causes that attract pseudo-religious enthusiasm and intellectual fanaticism’, supported by ‘prophets of doom and the language of apocalypse’.

For such ‘sceptics’, political ideology is often at work, along with a desire to signal their stiff upper lip and intellectual supremacy: I won’t be duped by Al Gore’s mind control! Can’t scare me!

But these examples also show the other side of availability bias: underestimating the frequency of the non-salient.

Almost nobody can remember a high-mortality flu pandemic that disproportionately killed young adults, so we consider it impossible; none of us has ever actually observed human climate forcing, so we dismiss it. These are not even what Nassim Taleb calls ‘Black Swans’: they are merely beyond our immediate experience, and that’s enough to reduce our risk perception.

One strange example of this tendency is a backlash against ‘peanut allergy hysteria.’

Over the past few decades, the incidence of diagnosed peanut allergies in Australia, the US and the UK appears to have increased sharply. Allergic diseases generally seem to have become much more prevalent since the Second World War. But nut allergies get a lot of attention due to anaphylaxis and a few schoolkid deaths.

It seems likely that parents exaggerate the risk of peanut-related deaths due to the availability heuristic. Some schools have responded with extreme allergen-avoidance measures, like banning nuts completely.

This has led Slate to wonder ‘Are nut allergies taking over the planet?’ and the New York Times to consider ‘Are Nut Bans Promoting Hysteria?’ The point is arguable, though much of it is delivered in a silly our-way-of-life-is-non-negotiable style. (‘Taking precautions for the radically sensitive, however, means asking a lot of people to change their behavior.’)

Yet behind the opposition to ‘peanut hysteria’ seems to be an intuitive refusal to believe that the incidence of food allergies or anaphylaxis could be rising. Joel Stein of the Los Angeles Times writes in ‘Nut Allergies – A Yuppie Invention’ that ‘[your] kid doesn’t have an allergy to nuts. Your kid has a parent who needs to feel special… But unless you’re a character on Heroes, genes don’t mutate fast enough to have caused an 18% increase in childhood food allergies between 1997 and 2007… [It] is strange how peanut allergies are only an issue in rich, lefty communities.’

Nobody seems to doubt the rising prevalence of asthma (asthma rates in Australia doubled between 1982 and 1992) or eczema, conditions which are related to food allergies. Perhaps the difference in attitudes can be explained by the higher population frequency of asthma and eczema, which leaves more people with personal experience (a friend, a relative or themselves) of these conditions, and less reason to doubt their legitimacy.

In contrast, I suspect most people find food-induced anaphylactic shock not vivid but unimaginable. Where asthma has through familiarity become domesticated and somewhat mundane, death by peanut seems something that could only be dreamed up by hysterical, overprotective middle-class parents.

Sokal outclassed

February 10, 2010

Behold: the Fat Studies Reader. From the chapter “Sitting Pretty: Fat Bodies, Classroom Desks and Academic Excess” by Ashley Hetrick and Derek Attig:

The relationship between classroom desks and disciplinary practices that seek to form and control ‘size and general configuration’ is evident: the hard materials and unforgiving shapes of these desks punish student bodies that exceed their boundaries with pain and social shame. Some fat students are unable or unwilling to subject their bodies to the disciplinary powers of desks and must sit elsewhere. In these cases, desks can threaten fat students’ very identities as students; if their bodies cannot fit into structures that signify their intellectually receptive status, then they are, symbolically at least, unable to learn.  Homogenous thinness is rewarded with comfort and various other privileges accorded to those granted identification as both students and normal. In these ways, classroom desks control body size and thereby produce the ideal thin student.

Personally I feel the material overlaps with “Access to the Sky: Airplane Seats and Fat Bodies as Contested Spaces” by Joyce L. Huff.

I’m yet to read “Jiggle In My Walk: The Iconic Power of the ‘Big Butt’ in American Pop Culture” by Wendy A. Burns-Ardolino.

Neoclassical equilibrium is computationally intractable

February 4, 2010

The Arrow-Debreu paper proving the existence of competitive equilibrium (1954) is probably the most famous in modern economics. It shows that, given transitive, convex preferences and convex production sets (among other conditions), there exists a set of prices at which all markets clear. In principle, therefore, it’s possible for markets to converge on a point that brings the entire economy into balance. Whether or not they actually do, however, depends on finding an algorithmic implementation: a search procedure that arrives at such an equilibrium within a reasonable running time.

Is there a polynomial-time algorithm for computing general equilibrium? Herbert Scarf has been working on the problem since the 1960s. More recently, Xiaotie Deng and Li-Sha Huang have shown it to be NP-hard: unless P = NP, no algorithm can solve it in time that grows only polynomially with the size of the problem (i.e. the number of agents or goods in the economy). Now, from the latest PNAS, comes yet another demonstration that neoclassical equilibrium is computationally intractable:

We show that there is no discrete-time price-adjustment mechanism (any process that at each period looks at the history of prices and excess demands and updates the prices) such that for any market (a set of goods and consumers with endowments and strictly concave utilities) the price-adjustment mechanism will achieve excess demands that are at most an ϵ fraction of the total supply within a number of periods that is polynomial in the number of goods and 1/ϵ. This holds even if one restricts markets so that excess demand functions are differentiable with derivatives bounded by a small constant. For the convergence time to the actual price equilibrium, we show by a different method a stronger result: Even in the case of three goods with a unique price equilibrium, there is no function of ϵ that bounds the number of periods needed by a price-adjustment mechanism to arrive at a set of prices that is ϵ-close to the equilibrium.
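To make ‘price-adjustment mechanism’ concrete, here is a minimal tâtonnement sketch of my own (it is not from the paper): an exchange economy with two Cobb-Douglas consumers, where each period the price of any good in excess demand is nudged up and the price of any good in excess supply is nudged down. Cobb-Douglas preferences are a deliberately easy case in which this happens to converge; the theorem says no update rule of this general form can converge quickly for all markets.

```python
# Sketch of a discrete-time tatonnement (not from the PNAS paper).
# Two Cobb-Douglas consumers, three goods; prices move with excess demand.
import numpy as np

alpha = np.array([[0.5, 0.3, 0.2],      # consumers' Cobb-Douglas weights
                  [0.2, 0.3, 0.5]])
endow = np.array([[1.0, 0.0, 2.0],      # consumers' endowments of each good
                  [0.0, 2.0, 1.0]])

def excess_demand(p):
    income = endow @ p                              # value of each endowment
    demand = alpha * income[:, None] / p            # Cobb-Douglas demands
    return demand.sum(axis=0) - endow.sum(axis=0)   # total demand minus supply

p = np.ones(3)                                      # start from equal prices
for step in range(2000):
    z = excess_demand(p)
    if np.abs(z).max() < 1e-6:                      # markets (nearly) cleared
        break
    p = np.maximum(p + 0.1 * z, 1e-9)               # raise prices where demand exceeds supply
    p = p / p.sum()                                 # only relative prices matter

print("approximate equilibrium prices:", np.round(p, 4))
print("residual excess demands:       ", np.round(excess_demand(p), 6))
```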

Why is this important? A major objection to the notion of socialist planning – due to Hayek – is that a centralised planning computer is inferior to decentralised market processes performed by a distributed collection of human ‘computers’. This is to understand the economy as a giant information processor, which performs a search procedure for optimal points where resource allocation is efficient, welfare is maximised and the sum of all excess demands is zero. According to Hayek, this problem is on a scale beyond the capacity of any planning agency. Now we see that a market economy also lacks the computational resources even to approximate such optima. The concept of mechanical equilibrium, a point in phase space that balances all economic forces, stands exposed as a mere figment: theoretically worthless, and a feeble tool for any polemicist arguing for the value of one economic system over another.