Notes on Gordon Lecture - 1

Lecture link

An example

  1. $m$ agents, $n$ goods; $i$, $j$ are indices for agents and goods, respectively.

  2. Variables: $p_i \in \mathbb{R}^n$ for production, and $c_i \in \mathbb{R}^n$ for consumption.

  3. Functions: $d_i(c_i)$ for the utility of consumption ($d$ for demand), and $s_i(p_i)$ for the cost of production ($s$ for supply).

$$\max_{p_i, c_i} \sum_{i} \left( d_i(c_i) - s_i(p_i) \right)$$

s.t. $\sum_{i} p_i = \sum_{i} c_i$ (i.e., the market clears)

The above program would be easier to solve if one could optimize each agent's $c_i$ and $p_i$ independently. However, this decomposition generally fails because it violates the clearing constraint. Note that the maximum objective obtained by optimizing each agent's $c_i$ and $p_i$ independently, ignoring the clearing constraint, is always an upper bound on the optimal value of the original problem.
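One line of reasoning makes the upper-bound claim explicit: dropping the clearing constraint only enlarges the feasible set, so the unconstrained optimum can only be larger:

$$\max_{\{p_i, c_i\}:\; \sum_i p_i = \sum_i c_i} \;\sum_{i} \left( d_i(c_i) - s_i(p_i) \right) \;\leq\; \sum_{i} \max_{p_i, c_i} \left( d_i(c_i) - s_i(p_i) \right)$$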

Competitive equilibria. We want to modify the program so that:

  1. We optimize each agent's objective independently without caring about the clearing constraint.

  2. The clearing constraint is still somehow satisfied.

To do this, one can introduce a price $\lambda_j$ for each item $j \in [n]$. Note that $\lambda_j$ is not a variable optimized by our program.

Given a price vector $\lambda$, consider the following new optimization problem:

Program 2.

$$\sum_{i} \max_{p_i, c_i} \left[ \left( d_i(c_i) - \lambda \cdot c_i \right) - \left( s_i(p_i) - \lambda \cdot p_i \right) \right]$$

where $\cdot$ denotes the dot product. This program optimizes each agent's objective independently. However, it is not immediately clear that solving Program 2 yields an optimal solution to the original program.

The key is that, assuming goods are divisible, there always exists a $\lambda^{*}_j$ for each item $j \in [n]$ such that, given the resulting price vector $\lambda^{*}$ (bundle of prices), the optimal solution to Program 2 satisfies the clearing constraint. To see this intuitively, consider the aggregate supply and demand (multi-variable) functions, where the input to each function is $\lambda$ and the outputs are the total production and consumption obtained by solving Program 2 under $\lambda$. A clearing price vector $\lambda^{*}$ is obtained where these two curves intersect. Such a $\lambda^{*}$ is often referred to as a Walrasian equilibrium.
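As a toy sanity check (a single agent and a single good, not from the lecture): take $d(c) = \log c$ and $s(p) = p^2 / 2$. Solving Program 2 under a price $\lambda$ gives demand $c(\lambda) = 1/\lambda$ and supply $p(\lambda) = \lambda$, and the two curves intersect where

$$\frac{1}{\lambda^*} = \lambda^* \quad \Longrightarrow \quad \lambda^* = 1, \qquad c^* = p^* = 1,$$

so the market clears at the Walrasian equilibrium $\lambda^* = 1$.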

Now, suppose such a vector $\lambda^{*}$ is given, and let $c_i^{*}$ and $p_i^{*}$ be the optimal solutions to Program 2 for each $i \in [m]$. We know that the clearing constraint is satisfied, i.e., $\sum_{i} c_i^{*} = \sum_{i} p_i^{*}$. Thus, the objective value of Program 2 can be rewritten as:

$$\begin{align*}
\sum_{i} \left( d_i(c_i^*) - \lambda^* \cdot c_i^* \right) - \left( s_i(p_i^*) - \lambda^* \cdot p_i^* \right)
&= \sum_{i} \left( d_i(c_i^*) - s_i(p_i^*) \right) - \sum_{i} \left( \lambda^* \cdot c_i^* - \lambda^* \cdot p_i^* \right)\\
&= \sum_{i} \left( d_i(c_i^*) - s_i(p_i^*) \right) - \lambda^* \cdot \sum_{i} \left( c_i^* - p_i^* \right)\\
&= \sum_{i} \left( d_i(c_i^*) - s_i(p_i^*) \right)
\end{align*}$$

As a result, under $\lambda^{*}$, an optimal solution to Program 2 is also an optimal solution to the original program.

To find a $\lambda^{*}$

One possible way is to use the tatonnement algorithm, which incrementally adjusts $\lambda$ (based on a learning rate) until the market clears. The algorithm can be stated as follows:

  1. Initialize $\lambda$ to some arbitrary value, say all zeros.

  2. Repeat the following steps ($k$ denotes the iteration index) until $\left\| \sum_{i} c_i - \sum_{i} p_i \right\| \leq \epsilon$:

    1. Solve Program 2 under $\lambda$.
    2. $\lambda \leftarrow \lambda + \ell_k \left( \sum_{i} c_i - \sum_{i} p_i \right)$, where $\ell_k$ is a learning rate.
  3. Return $\lambda$.
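The loop above can be sketched in code. This is a minimal single-good ($n = 1$) example with hypothetical utilities $d_i(c) = a_i \sqrt{c}$ and costs $s_i(p) = b_i p^2 / 2$ (chosen so Program 2 has a closed-form per-agent solution; not from the lecture):

```python
a = [2.0, 3.0, 4.0]   # demand parameters, one per agent (hypothetical)
b = [1.0, 1.5, 2.0]   # supply parameters, one per agent (hypothetical)

def solve_program2(lam):
    """Each agent independently maximizes (d_i(c) - lam*c) - (s_i(p) - lam*p).
    First-order conditions: a_i / (2*sqrt(c)) = lam and b_i * p = lam."""
    c = [(ai / (2 * lam)) ** 2 for ai in a]
    p = [lam / bi for bi in b]
    return c, p

def tatonnement(lam=1.0, lr=0.1, eps=1e-8, max_iters=100_000):
    # Start from a positive price: with these utilities, demand blows up
    # at lam = 0, so the "all zeros" initialization is replaced by 1.0.
    for _ in range(max_iters):
        c, p = solve_program2(lam)
        excess = sum(c) - sum(p)          # excess demand
        if abs(excess) <= eps:
            break
        lam += lr * excess                # raise price when demand > supply
    return lam

lam_star = tatonnement()
c_star, p_star = solve_program2(lam_star)
print(lam_star, sum(c_star), sum(p_star))  # total consumption ~ total production
```

At the returned $\lambda^{*}$, aggregate consumption and production coincide up to the tolerance $\epsilon$, i.e., the market clears.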

Remark. The above algorithm is not guaranteed to reach an equilibrium [Ackerman, Journal of Economic Methodology, 2002].

A common form of optimization

  1. Objective function: $\min_x f(x)$

  2. Inequality constraints: $g_i(x) \leq 0$ for $i \in [m]$

  3. Equality constraints: $h_j(x) = 0$ for $j \in [n]$

  4. Variables: $x \in \mathbb{H}^d$ for some set $\mathbb{H}$.
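As an illustration, the market-clearing program from earlier fits this template (written as a minimization, with no inequality constraints): take $x = (p_1, \ldots, p_m, c_1, \ldots, c_m)$, so $\mathbb{H} = \mathbb{R}$ and $d = 2mn$, with

$$f(x) = -\sum_{i} \left( d_i(c_i) - s_i(p_i) \right), \qquad h_j(x) = \sum_{i} p_{ij} - \sum_{i} c_{ij} \;\text{ for } j \in [n].$$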

The rest of the lecture is a brief overview of other examples of optimization problems, see here.