Notes on Roughgarden AGT Lecture - 4
Knapsack setting
Single-parameter environment.
$n$ bidders, each with a weight $w_i$ and a valuation $v_i$.
The seller has a capacity $W$.
The feasible allocation set is the set of $0$-$1$ vectors $(x_1, \dots, x_n)$ where $x_i \in \{0, 1\}$ for all $i$ and $\sum_{i=1}^{n} w_i x_i \le W$.
An example of this setting is TV ads bidding for a (long) commercial break. Each ad has its own duration (weight $w_i$), and $W$ is the total length of the break.
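To make the feasible set concrete, here is a minimal sketch of a feasibility check for the TV-ad example (the function name and the specific numbers are my own, for illustration only):

```python
from typing import Sequence

def is_feasible(x: Sequence[int], w: Sequence[float], W: float) -> bool:
    """Check that x is a 0-1 vector whose selected weights total at most W."""
    return all(xi in (0, 1) for xi in x) and \
        sum(xi * wi for xi, wi in zip(x, w)) <= W

# TV-ad example: three ads of 30s, 60s, 45s competing for a 90-second break.
w = [30, 60, 45]
print(is_feasible([1, 1, 0], w, 90))  # 30 + 60 = 90 <= 90 -> True
print(is_feasible([1, 1, 1], w, 90))  # 135 > 90 -> False
```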
A mechanism for Knapsack auction
Recall that an ideal mechanism satisfies the following:
DSIC: Bidding one's true valuation $v_i$ is a dominant strategy (and IR: truthful bidding never yields negative utility).
Maximized social surplus: $\sum_{i=1}^{n} v_i x_i$.
The auction runs in polynomial time.
Allocation rule
First, assuming (1) holds, we want an allocation rule that maximizes the social surplus $\sum_{i=1}^{n} b_i x_i$. Observe that any such allocation rule solves the underlying knapsack problem, which is NP-hard. Thus, no surplus-maximizing allocation rule is efficient (unless P = NP). It follows that an ideal mechanism satisfying all three properties simultaneously does not exist, unless P = NP.
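For intuition on where the hardness sits: the textbook dynamic program below computes the surplus-maximizing allocation value in $O(nW)$ time when weights are integers. This is only *pseudo*-polynomial (exponential in the bit-length of $W$), so it does not contradict the NP-hardness above. A sketch with hypothetical inputs:

```python
def max_surplus(v, w, W):
    """Standard knapsack DP: best[c] = max surplus achievable with capacity c.
    Runs in O(n * W) time -- pseudo-polynomial, since W is encoded in binary."""
    best = [0] * (W + 1)
    for vi, wi in zip(v, w):
        # Iterate capacities downward so each bidder is used at most once.
        for c in range(W, wi - 1, -1):
            best[c] = max(best[c], best[c - wi] + vi)
    return best[W]

print(max_surplus([60, 100, 120], [10, 20, 30], 50))  # 220
```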
Payment rule
Note that the allocation rule above is monotone. By Myerson's Lemma (in our single-parameter setting), there is a payment rule that implements it. Fix a player $i$ and the other bids $b_{-i}$; it is not hard to see that $z \mapsto x_i(z, b_{-i})$ is a step function with only one breakpoint $z^*$, moving the allocation from $0$ to $1$. Thus, if $b_i \ge z^*$, the payment is $z^*$; otherwise, the payment is $0$.
Note that finding such a $z^*$ given $i$ and $b_{-i}$ involves solving another instance of knapsack. Thus, the payment rule itself cannot be computed in polynomial time, unless P = NP.
Note. Myerson's Lemma concerns neither running time nor surplus guarantees. It only says that if an allocation rule is monotone, there is a payment rule that implements it (i.e., makes the mechanism DSIC).
Approximated solutions
The dominant paradigm in algorithmic mechanism design is to relax the second constraint (optimal surplus) as little as possible, subject to the first (DSIC) and third (polynomial-time) constraints. For single parameter environments, Myerson’s Lemma implies that the following goal is equivalent: design a polynomial-time and monotone allocation rule that comes as close as possible to maximizing the social surplus.
Question. [10:26AM - 04/30/2025]
Why is the following goal:
Design a polynomial-time and monotone allocation rule that comes as close as possible to maximizing the social surplus.
equivalent to finding a mechanism that is DSIC, runs in polynomial time, and approximates the surplus?
My main question is about efficiency: why does a poly-time allocation rule imply that its implementing payment rule is also poly-time? For instance, consider a monotone piecewise step allocation function that is poly-time computable; is it possible that its breakpoints are hard to compute?
My current intuition is that if the allocation rule is efficient (suppose $x_i(\cdot, b_{-i})$ is a piecewise step function), then to find all breakpoints that come before a bid $b_i$ (fixing $b_{-i}$), one can do binary search over $[0, b_i]$.
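The binary-search intuition can be sketched for the simplest case, a $0/1$ allocation with a single breakpoint (as in the knapsack setting). The function name and search bounds below are my own assumptions; the allocation rule is treated as a black box:

```python
def critical_bid(wins, lo=0.0, hi=1e6, tol=1e-6):
    """Binary-search the single breakpoint of a monotone 0/1 allocation.

    `wins(z)` is the black-box allocation x_i(z, b_{-i}) with the other bids
    fixed; monotonicity means it flips from 0 to 1 at most once on [lo, hi].
    Each probe is one call to the (poly-time) allocation rule.
    """
    if not wins(hi):
        return None  # i loses even at the maximum bid; the payment is 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if wins(mid):
            hi = mid  # mid already wins: breakpoint is at or below mid
        else:
            lo = mid  # mid still loses: breakpoint is above mid
    return hi  # ~ z*, the Myerson payment for a winner

# Toy black box: player i wins exactly when its bid is at least 42.
z_star = critical_bid(lambda z: z >= 42)
print(abs(z_star - 42) < 1e-3)  # True
```

Note this only locates the breakpoint up to the numerical tolerance `tol`; with exact (e.g., rational) bid data one would need an exact variant.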
Overall, the problem becomes designing an allocation algorithm that approximates the optimal surplus, where the algorithm should be monotone w.r.t. each bid (while the other bids are fixed).
Myerson's Lemma implies that algorithmic mechanism design boils down to algorithm design in an oddly restricted (via monotonicity) “computational model” — the entire game-theoretic aspect of the design goal is neatly compiled into a relatively intuitive extra constraint on the allocation rule.
It should be clear what the holy grail in algorithmic mechanism design is: for as many NP-hard problems of interest as possible, to match the best-known approximation guarantee for (not necessarily monotone) approximate surplus maximization algorithms — or even the best-possible approximation guarantee, subject to $P \neq NP$. That is, we would like the DSIC/monotone constraint to cause no additional surplus loss, beyond the loss we already have to suffer due to the polynomial-time constraint.
Remark. An allocation rule that maximizes social surplus must be monotone, and is thus implementable by a payment rule.
An allocation rule that approximates social surplus
Consider the following allocation algorithm:
Algorithm 1. Given a bid vector $b$, weight vector $w$, and total capacity $W$:
1. Sort bidders by $b_i / w_i$ (i.e., bang-per-buck) in non-increasing order. Let $\sigma$ be the new order of the bidders.
2. Select bidders (i.e., set $x_i = 1$) in this order until the capacity constraint $\sum_i w_i x_i \le W$ would be violated. (We skip a bidder if its weight is too high and move on to the next bidder under $\sigma$.)
3. Let $S$ be the resulting set of selected bidders, and let $V(S)$ denote the surplus of $S$ (i.e., the sum of the bids over bidders in $S$).
4. Return the better of $S$ and $\{i^*\}$, where $i^*$ is the bidder with the highest bid $b^*$.
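Algorithm 1 can be sketched in a few lines; the function name and the toy instance are mine, and tie-breaking among equal densities is left to the sort's stability:

```python
def greedy_knapsack_allocation(b, w, W):
    """Greedy-by-density allocation (a monotone 1/2-approximation sketch).
    Returns the list of winning bidder indices."""
    # Step 1: sort by bang-per-buck b_i / w_i, highest first.
    order = sorted(range(len(b)), key=lambda i: b[i] / w[i], reverse=True)
    # Step 2: greedily select, skipping bidders that no longer fit.
    S, used = [], 0
    for i in order:
        if used + w[i] <= W:
            S.append(i)
            used += w[i]
    # Step 4: compare with the single highest bidder that fits on its own.
    fits = [i for i in range(len(b)) if w[i] <= W]
    i_star = max(fits, key=lambda i: b[i]) if fits else None
    if i_star is not None and b[i_star] > sum(b[i] for i in S):
        return [i_star]
    return S

print(greedy_knapsack_allocation([10, 9, 9], [10, 6, 6], 12))  # [1, 2]
```

The final comparison is essential: on an instance like $b = (10, 100)$, $w = (1, 100)$, $W = 100$, pure greedy picks only the dense small bidder (surplus 10), while the highest bidder alone achieves 100.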
I think the argument for the approximation guarantee presented below is not exactly the same as the one in Prof. Roughgarden's lecture (in our argument, we use the fact that the construction of $S$ is maximal; to be honest, I didn't really read the proof in the lecture notes, and as this $1/2$-approximation is quite standard, any proof works), but the ideas should be almost the same.
Theorem 1. Algorithm 1 is monotone, and it gives a $\frac{1}{2}$-approximation of the optimal social surplus.
Proof sketch.
It is clear that, when the weights are fixed, a bidder can only improve its position in the ordering (in the sorting step) by increasing its bid; hence a winner keeps winning when it raises its bid, and the rule is monotone.
We now establish the $1/2$-approximation guarantee. Given an instance of the problem, let $\mathrm{OPT}$ denote the largest surplus over all feasible solutions.
We first consider two cases where the $1/2$-approximation easily follows.
- If $b^* \ge \mathrm{OPT}/2$, then returning the highest bidder alone gives a $1/2$-approximation (see the last step of the algorithm).
Recall that $S$ is the set of bidders selected in the greedy step. Let $W(S)$ be the sum of the weights over bidders in $S$. Note that the bidders in $S$ together offer the highest bang-per-buck among all selections of bidders with total weight $W(S)$.
- If $W(S) \ge W/2$, then any feasible solution, even one using the full capacity $W \le 2W(S)$, achieves a social surplus of at most $2V(S)$. Thus a $1/2$-approximation.
Now consider the remaining case, where $b^* < \mathrm{OPT}/2$ and $W(S) < W/2$.
Since no bidder outside $S$ can be added to $S$ without violating the capacity constraint (w.l.o.g. we assume $[n] \setminus S$ is non-empty; otherwise the algorithm finds an optimal solution), all bidders in $[n] \setminus S$ have weight strictly larger than $W - W(S) > W/2$. This implies that, in any feasible solution, at most one bidder is selected from $[n] \setminus S$, and the rest (if any) of the selected bidders must come from $S$. Thus $V(S)$ falls short of $\mathrm{OPT}$ by at most the largest bid over bidders in $[n] \setminus S$, which is at most $b^* < \mathrm{OPT}/2$. This concludes the proof. ∎
Remark
There is a well-known PTAS for knapsack based on a simple re-valuation (rounding) idea; its original form is not monotone.
The general recipe: consider an NP-hard optimization problem, check whether the state-of-the-art approximation algorithm directly leads to a DSIC mechanism, and, if not, tweak it or design a new approximation algorithm that does, hopefully without degrading the approximation guarantee.
Not all NP-hard problems fit into the context of auction mechanisms, right? Say, vertex cover: what is the notion of a bid in this case?
Black-Box Reductions
Is there a natural single-parameter problem where the best poly-time approximation guarantee is strictly worsened when we have the monotone constraint on the algorithm?
This topic of black-box reductions (instead of doing case-by-case design, we have a general reduction that transforms a non-monotone polynomial-time algorithm into a monotone poly-time algorithm, without degrading the approximation ratio by too much) is very interesting.
Revelation principle
The revelation principle says that if there is a mechanism $M$ in which each player has a dominant strategy, regardless of their private valuations, then there exists a mechanism $M'$ derived from $M$ which is DSIC.
Proof sketch.
Note. A dominant strategy of player $i$ can be viewed as the output of a function $s_i(\cdot)$ whose input is the valuation $v_i$ (e.g., bidding twice the valuation).
Given the set of bidders with private valuations $v_1, \dots, v_n$, let $s_i(v_i)$ be the dominant strategy of bidder $i$ under the mechanism $M$. That is, under $M$, $s_i(v_i)$ is the bid that $i$ would submit using its dominant strategy.
Consider another mechanism $M'$, where $M'$ takes a bid $b_i$ from each $i$, submits $s_i(b_i)$ to $M$, and lets $M$ do the rest. ∎
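The wrapper construction is short enough to write down. A minimal sketch, where $M$, the strategy functions, and the toy "bid half your valuation" dominant strategy are all hypothetical placeholders, not claims about any real mechanism:

```python
def revelation_wrapper(M, strategies):
    """Build the direct mechanism M' from mechanism M (a function from a bid
    profile to an outcome) and each player's dominant-strategy function s_i.

    M' plays your dominant strategy in M on your behalf, so reporting your
    true valuation to M' is dominant.
    """
    def M_prime(reports):
        return M([s(r) for s, r in zip(strategies, reports)])
    return M_prime

# Toy example: M awards the item to the highest submitted bid, and suppose
# (hypothetically) every player's dominant strategy in M is to bid half
# their valuation.
M = lambda bids: max(range(len(bids)), key=lambda i: bids[i])
M_prime = revelation_wrapper(M, [lambda v: v / 2] * 3)
print(M_prime([4, 10, 6]))  # winner is player 1, as under M with bids [2, 5, 3]
```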
Remark
Note that the above is an existence proof (i.e., there exists an $M'$ that somehow knows $s_i(\cdot)$ for all players $i$).