Notes on Roughgarden AGT Lecture - 4

Lecture link

Knapsack setting

  1. Single-parameter environment.

  2. $n$ bidders, each with a weight $w_i$ and a valuation $v_i$.

  3. The seller has a capacity $W$.

  4. The feasible allocation set $X$ is the set of $0$-$1$ vectors $x$ satisfying $\sum_{i} x_i w_i \leq W$.

An example of this setting is TV ads bidding for a (long) commercial break. Each ad has its own duration (weight $w_i$), and $W$ is the total length of the break.
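As a minimal sketch of the feasibility constraint (the numbers below are made up for the TV-break example; they are not from the lecture):

```python
def is_feasible(x, weights, W):
    """Check membership in the feasible set X: x is a 0-1 vector whose selected weights fit in W."""
    return all(xi in (0, 1) for xi in x) and sum(xi * wi for xi, wi in zip(x, weights)) <= W

# Three ads of 30s, 60s, and 45s competing for a 90-second break.
print(is_feasible([1, 0, 1], [30, 60, 45], 90))  # True: 30 + 45 = 75 <= 90
print(is_feasible([1, 1, 1], [30, 60, 45], 90))  # False: 135 > 90
```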

A mechanism for Knapsack auction

Recall that an ideal mechanism satisfies the following:

  1. DSIC: Bidding one's true valuation is a dominant strategy (and IR: a truthful bidder's utility is never negative).

  2. Maximized social surplus: $\sum_{i} x_i v_i$.

  3. The auction runs in polynomial time.

Allocation rule

First, assuming (1) holds, we want an allocation rule that maximizes social surplus. Observe that any surplus-maximizing allocation rule

$$x(b) = \arg\max_{x \in X} \sum_{i} b_i x_i$$

solves the underlying knapsack problem. Thus, no surplus-maximizing allocation rule is efficient (unless P = NP). It follows that an ideal mechanism satisfying all three properties simultaneously does not exist, unless P = NP.
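To make the claim concrete, here is a sketch of the surplus-maximizing rule for integer weights via the textbook knapsack dynamic program (names and interface are illustrative, not from the lecture). Its $O(nW)$ running time is pseudo-polynomial, i.e., exponential in the number of bits needed to encode $W$, so it does not contradict the hardness statement above.

```python
def surplus_maximizing_allocation(bids, weights, W):
    """Exact surplus-maximizing 0/1 allocation via the standard knapsack DP (integer weights)."""
    n = len(bids)
    # dp[c] = best surplus achievable with capacity c; chosen[c] = the corresponding allocation
    dp = [0.0] * (W + 1)
    chosen = [[0] * n for _ in range(W + 1)]
    for i in range(n):
        for c in range(W, weights[i] - 1, -1):  # descending so each bidder is used at most once
            candidate = dp[c - weights[i]] + bids[i]
            if candidate > dp[c]:
                dp[c] = candidate
                chosen[c] = chosen[c - weights[i]].copy()
                chosen[c][i] = 1
    return chosen[W], dp[W]  # allocation vector and its surplus
```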

Payment rule

Note that the allocation rule above is monotone. By Myerson's Lemma (in our single-parameter setting), there is a payment rule that implements it. Fix a player $i$ and the bids $b_{-i}$; it is not hard to see that $x_i(\cdot, b_{-i})$ is a step function of $b_i$ with a single breakpoint $z$, at which the allocation $x_i(b)$ jumps from $0$ to $1$. Thus, if $b_i \leq z$, the payment is $0$; otherwise, the payment is $z$.

Note that finding such a $z$, given $w_i$ and $b_{-i}$, involves solving another instance of knapsack. Thus, the payment function itself cannot be computed in polynomial time, unless P = NP.

Note. Myerson's Lemma concerns neither running time nor surplus guarantees. It only says that if an allocation rule is monotone, then there is a payment rule that implements it (i.e., makes the mechanism DSIC).
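For reference, in a single-parameter environment with a monotone, piecewise-constant allocation rule, the payment that Myerson's Lemma identifies for bidder $i$ is

$$p_i(b_i, b_{-i}) \;=\; \sum_{j} z_j \cdot \big[\text{jump of } x_i(\cdot, b_{-i}) \text{ at } z_j\big],$$

where the sum ranges over the breakpoints $z_j$ of $x_i(\cdot, b_{-i})$ in $[0, b_i]$. With a single breakpoint $z$ and a jump from $0$ to $1$, this is exactly the pay-$z$-if-allocated rule described above.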

Approximate solutions

The dominant paradigm in algorithmic mechanism design is to relax the second constraint (optimal surplus) as little as possible, subject to the first (DSIC) and third (polynomial-time) constraints. For single-parameter environments, Myerson's Lemma implies that the following goal is equivalent: design a polynomial-time and monotone allocation rule that comes as close as possible to maximizing the social surplus.

Question. [10:26AM - 04/30/2025]

Why is the following goal:

Design a polynomial-time and monotone allocation rule that comes as close as possible to maximizing the social surplus.

equivalent to finding a mechanism that is DSIC, efficient, and approximates the surplus?

My main question concerns efficiency: why does a poly-time allocation rule imply that the payment rule implementing it is also poly-time? For instance, consider a monotone piecewise-constant allocation rule that is efficient; is it possible that its breakpoints are hard to compute?

My current intuition is that if the allocation rule $x$ is efficient (suppose $x(\cdot, b_{-i})$ is a piecewise-constant function), then to find the breakpoints that come before a bid $b_i$ (fixing $b_{-i}$), one can do binary search over $[0, b_i]$.
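Here is a sketch of that intuition for the $0$-$1$ case, where `wins(b)` is assumed black-box access to $x_i(b, b_{-i})$ for a fixed $b_{-i}$ (a hypothetical interface, not from the lecture); each probe is one run of the poly-time allocation rule, so the critical bid can be located to any fixed precision with polynomially many probes.

```python
def critical_bid(wins, b_hi, tol=1e-9):
    """Binary-search the single breakpoint z of a monotone 0/1 allocation rule on [0, b_hi].

    wins(b) returns True iff bidder i is allocated when bidding b (with b_{-i} fixed).
    Assumes wins is monotone: False below some threshold z and True above it.
    Returns an estimate of z within additive error tol, or None if i loses even at b_hi.
    """
    if not wins(b_hi):
        return None
    lo, hi = 0.0, b_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if wins(mid):
            hi = mid  # the threshold is at most mid
        else:
            lo = mid  # the threshold is above mid
    return hi
```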

Overall, the problem becomes designing an allocation algorithm that approximates the optimal surplus, where the algorithm should be monotone w.r.t. each bid (with the other bids fixed).

Myerson's Lemma implies that algorithmic mechanism design boils down to algorithm design in an oddly restricted (via monotonicity) “computational model” — the entire game-theoretic aspect of the design goal is neatly compiled into a relatively intuitive extra constraint on the allocation rule.

It should be clear what the holy grail in algorithmic mechanism design is: for as many NP-hard problems of interest as possible, to match the best-known approximation guarantee for (not necessarily monotone) approximate surplus maximization algorithms — or even the best-possible approximation guarantee, subject to $P \neq NP$. That is, we would like the DSIC/monotone constraint to cause no additional surplus loss, beyond the loss we already have to suffer due to the polynomial-time constraint.

Remark. An allocation rule that maximizes social surplus must be monotone, and thus it can be implemented by a payment rule.

An allocation rule that approximates social surplus

Consider the following allocation algorithm:

Algorithm 1. Given a bid vector $\vec{b}$, a weight vector $\vec{w}$, and a total capacity $W$:

  1. Sort the bidders by $b_i / w_i$ (i.e., bang-per-buck) in non-increasing order. Let $\Pi$ be the resulting order of bidders.

  2. Select bidders (i.e., set $x_i = 1$) in the order $\Pi$ as long as the capacity $W$ is not violated.

    • Note that we skip a bidder whose weight does not fit into the remaining capacity and move on to the next bidder under $\Pi$.

    • Let $N'$ be the resulting set of selected bidders.

  3. Let $obj(N')$ denote the surplus of $N'$ (i.e., the sum of the bids over the bidders in $N'$).

  4. Return $\max \{ b^*, obj(N') \}$, where $b^*$ is the highest bid in $\vec{b}$; that is, allocate to either $N'$ or the single highest bidder, whichever yields the larger surplus (see the code sketch after this list).
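A minimal sketch of Algorithm 1 (variable names are illustrative; ties and a top bidder whose weight exceeds $W$ are handled in the simplest way):

```python
def greedy_allocation(bids, weights, W):
    """Algorithm 1: greedy by bang-per-buck with skipping, then compare with the single top bidder."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: bids[i] / weights[i], reverse=True)  # non-increasing b_i / w_i
    x = [0] * n
    remaining = W
    for i in order:                      # step 2: select greedily, skipping bidders that do not fit
        if weights[i] <= remaining:
            x[i] = 1
            remaining -= weights[i]
    greedy_surplus = sum(bids[i] for i in range(n) if x[i] == 1)   # step 3: obj(N')
    top = max(range(n), key=lambda i: bids[i])                     # step 4: fall back to the top bidder
    if weights[top] <= W and bids[top] > greedy_surplus:
        x = [0] * n
        x[top] = 1
    return x
```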

I think the argument for the approximation guarantee presented below is not exactly the same as the one in Prof. Roughgarden's lecture (in our argument, we use the fact that $N'$ is maximal; to be honest, I didn't really read the proof in the lecture notes, and since this $2$-approximation is quite standard, any proof works), but the ideas should be almost the same.

Theorem 1. Algorithm 1 is monotone, and it gives a $2$-approximation of the optimal social surplus.

Proof sketch.

It is clear that, with the weights fixed, a bidder can only improve its position in the ordering used in step 2 by increasing its bid, and a better position can only help it get selected; hence the allocation rule is monotone.

We now establish the $2$-approximation guarantee. Given an instance $\langle \vec{b}, \vec{w}, W \rangle$ of the problem, let $OPT$ denote the largest surplus over all feasible solutions.

We first consider two cases where the $2$-approximation easily follows.

  1. If $b^* = \max(\vec{b}) \geq (1/2) \cdot OPT$, then the $2$-approximation follows immediately (see step 4 of the algorithm).

Recall that $N'$ is the set of bidders selected in step 2. Let $W(N')$ be the sum of the weights over the bidders in $N'$. Note that the bidders in $N'$ together offer the highest bang-per-buck among all selections of bidders with total weight $W(N')$.

  2. If $W(N') \geq (1/2) \cdot W$, then for any feasible solution, even one using the full capacity $W$, the resulting social surplus is at most $2 \cdot obj(N')$. Thus the $2$-approximation follows.

Now, consider the remaining case where $b^* < (1/2) \cdot OPT$ and $W(N') < (1/2) \cdot W$.

Since no bidder in $N \setminus N'$ can be added to $N'$ without violating the capacity constraint (w.l.o.g., we assume that $N \setminus N'$ is non-empty; otherwise, the algorithm finds an optimal solution), every bidder $j \in N \setminus N'$ has weight strictly larger than $(1/2) \cdot W$ (since $w_j > W - W(N') > (1/2) \cdot W$). This implies that, in any feasible solution, at most one bidder is selected from the set $N \setminus N'$, while the rest (if any) of the selected bidders must come from $N'$. Thus, $obj(N')$ falls short of $OPT$ by at most the largest bid over the bidders in $N \setminus N'$, which is at most $b^* < (1/2) \cdot OPT$. This concludes the proof.
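In symbols, writing $S^*$ for an optimal solution (notation introduced only here), the last step is

$$OPT \;=\; \sum_{i \in S^* \cap N'} b_i + \sum_{i \in S^* \setminus N'} b_i \;\leq\; obj(N') + b^* \;<\; obj(N') + \tfrac{1}{2} \cdot OPT,$$

and hence $obj(N') > \tfrac{1}{2} \cdot OPT$.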

Remark

There is a well-known PTAS for knapsack based on a simple re-valuation idea; its original form is not monotone.
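For concreteness, here is a sketch of the standard value-rounding scheme (my reconstruction of the textbook approximation, not the monotone variant discussed in the lecture; names are illustrative):

```python
def knapsack_value_rounding(values, weights, W, eps):
    """(1 - eps)-approximate knapsack: round values down to multiples of mu, then DP over rounded values."""
    n = len(values)
    mu = eps * max(values) / n              # rounding granularity: the "re-valuation" step
    scaled = [int(v // mu) for v in values]
    V = sum(scaled)
    INF = float('inf')
    dp = [0.0] + [INF] * V                  # dp[v] = minimum weight needed to reach rounded value exactly v
    for s, w in zip(scaled, weights):
        for v in range(V, s - 1, -1):
            if dp[v - s] + w < dp[v]:
                dp[v] = dp[v - s] + w
    best = max(v for v in range(V + 1) if dp[v] <= W)
    return best * mu                        # a lower bound on the surplus of the corresponding allocation
```

The rounding makes the DP run in time polynomial in $n$ and $1/\epsilon$, but a bidder raising its bid can change the granularity $\mu$ (it depends on the largest bid), which is one informal way to see why monotonicity is not automatic.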

Consider an NP-hard optimization problem: check whether the state-of-the-art approximation algorithm directly leads to a DSIC mechanism and, if not, tweak it or design a new approximation algorithm that does, hopefully without degrading the approximation guarantee.

Not all NP-hard problems fit into the context of auction mechanisms, right? Say, for vertex cover, what is the notion of a bid in this case?

Black-Box Reductions

Is there a natural single-parameter problem where the best poly-time approximation guarantee becomes strictly worse once we impose the monotonicity constraint on the algorithm?

This topic of black-box reductions (instead of working case by case, a general reduction that transforms a non-monotone polynomial-time algorithm into a monotone poly-time algorithm without degrading the approximation ratio by too much) is very interesting.

Revelation principle

The revelation principle says that, if there is a mechanism $M$ in which each player has a dominant strategy, regardless of their private valuations, then there exists a mechanism $M'$, derived from $M$, which is DSIC.

Proof sketch.

Note. A dominant strategy of player $i$ can be viewed as the output of a function $s_i(v_i)$ whose input is the valuation (e.g., bidding twice the valuation).

Given the set of $n$ bidders with private valuations $v_i$, let $s_i(v_i)$ be the dominant strategy of bidder $i$ under the mechanism $M$, for $i \in [n]$. That is, under $M$, $s_i(v_i)$ is the bid that bidder $i$ (with valuation $v_i$) would submit when following its dominant strategy.

Consider another mechanism $M'$ that takes a bid $b_i$ from each bidder $i$, submits $b = (s_1(b_1), \ldots, s_n(b_n))$ to $M$, and lets $M$ do the rest. Truthful bidding ($b_i = v_i$) is then a dominant strategy in $M'$, precisely because $s_i(v_i)$ is a dominant strategy in $M$.
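A small sketch of the construction (the `Mechanism` type and all names are illustrative, not from the lecture):

```python
from typing import Callable, List, Tuple

Allocation = List[int]
Payments = List[float]
Mechanism = Callable[[List[float]], Tuple[Allocation, Payments]]

def direct_revelation(M: Mechanism, strategies: List[Callable[[float], float]]) -> Mechanism:
    """Build M' from M and the dominant strategies s_i: M' plays s_i(b_i) in M on each bidder's behalf."""
    def M_prime(reports: List[float]) -> Tuple[Allocation, Payments]:
        simulated_bids = [s_i(b_i) for s_i, b_i in zip(strategies, reports)]
        return M(simulated_bids)
    return M_prime
```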

Remark

Note that the above is an existence proof (i.e., there exists an $M'$ that somehow knows $s_i$ for all players).