The Arrow-Debreu existence proof for competitive equilibrium (1954) is probably the most famous paper in modern economics. It shows that, under assumptions including convex preferences and convex production sets, there exists a set of prices at which all markets clear. Note that the theorem guarantees existence, not uniqueness: in principle, markets could converge on a point that brings the entire economy into balance. Whether they actually do, however, depends on finding an algorithmic implementation: a search procedure that reaches an equilibrium within a reasonable running time.

Is there a polynomial-time algorithm for computing general equilibrium? Herbert Scarf has been working on the problem since the 1960s. More recently, Xiaotie Deng and Li-Sha Huang have shown it to be NP-hard: no algorithm can solve it in polynomial time unless P = NP, so in practice the running time is expected to grow exponentially with the size of the problem (i.e. the number of agents or goods in the economy). Now, from the latest *PNAS*, comes yet another demonstration that neoclassical equilibrium is computationally intractable:

We show that there is no discrete-time price-adjustment mechanism (any process that at each period looks at the history of prices and excess demands and updates the prices) such that for any market (a set of goods and consumers with endowments and strictly concave utilities) the price-adjustment mechanism will achieve excess demands that are at most an ϵ fraction of the total supply within a number of periods that is polynomial in the number of goods and 1/ϵ. This holds even if one restricts markets so that excess demand functions are differentiable with derivatives bounded by a small constant. For the convergence time to the actual price equilibrium, we show by a different method a stronger result: Even in the case of three goods with a unique price equilibrium, there is no function of ϵ that bounds the number of periods needed by a price-adjustment mechanism to arrive at a set of prices that is ϵ-close to the equilibrium.
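To make the abstract concrete, here is a minimal sketch of the kind of price-adjustment mechanism the theorem is about: classical tâtonnement, which raises the price of each good in proportion to its excess demand. The example economy (two Cobb-Douglas consumers, two goods) is an assumption of mine, chosen because Cobb-Douglas markets satisfy gross substitutes and so tâtonnement happens to converge there; the impossibility result says no such update rule can work fast for *all* markets.

```python
import numpy as np

def excess_demand(p, alpha, endow):
    """z_j = total demand minus total supply for good j.
    Cobb-Douglas demand: consumer i spends fraction alpha[i, j]
    of her wealth p . e_i on good j, buying alpha[i, j] * w_i / p_j units."""
    wealth = endow @ p                      # each consumer's budget
    demand = (alpha * wealth[:, None]) / p  # consumers x goods
    return demand.sum(axis=0) - endow.sum(axis=0)

def tatonnement(alpha, endow, step=0.1, tol=1e-8, max_iter=100_000):
    """Discrete-time price adjustment: p <- p + step * z(p)."""
    p = np.ones(alpha.shape[1])
    for _ in range(max_iter):
        z = excess_demand(p, alpha, endow)
        if np.max(np.abs(z)) < tol:
            break
        p = np.maximum(p + step * z, 1e-12)  # keep prices positive
    return p / p[0]  # normalize with good 0 as numeraire

# Hypothetical two-consumer, two-good exchange economy.
alpha = np.array([[0.6, 0.4],
                  [0.3, 0.7]])  # utility weights (rows sum to 1)
endow = np.array([[1.0, 0.0],
                  [0.0, 1.0]])  # each consumer owns one unit of one good
p = tatonnement(alpha, endow)
print(p)  # relative price of good 2 in units of good 1
```

For this instance the market-clearing condition can be solved by hand (the equilibrium price ratio is 4/3), so the sketch can be checked; the theorem's point is that no rule of this form, however clever about using the history of prices and excess demands, achieves this in polynomially many periods on every market.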

Why is this important? A major objection to the notion of socialist planning – due to Hayek – is that a centralised planning computer is inferior to the decentralised market processes performed by a distributed collection of human ‘computers’. This objection treats the economy as a giant information processor, performing a search for optimal points where resource allocation is efficient, welfare is maximised and the sum of all excess demands is zero. According to Hayek, this problem scales beyond the capacity of any planning agency. Now we see that a market economy also lacks the computational resources even to approximate such optima. The concept of mechanical equilibrium, a point in phase space that balances all economic forces, stands exposed as a mere figment: theoretically worthless, and a feeble tool for any polemicist arguing for the value of one economic system over another.

Tags: computer science, neoclassical economics, socialist planning

February 13, 2010 at 4:08 am |

Hehe. PNAS.