If you know that the set cover problem is NP-hard, and you know there is a ln(n)-approximation algorithm for set cover, you might ask whether there is, say, a constant-factor approximation algorithm for set cover. The answer is no, unless P=NP. In fact, there is no o(log n)-approximation algorithm, unless P=NP.
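For concreteness, here is a minimal sketch of the standard greedy algorithm that achieves the ln(n) approximation factor for set cover; the function name and the small example instance are illustrative, not taken from these notes.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset that covers the most
    still-uncovered elements. Achieves an O(ln n) approximation factor."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Choose the subset covering the largest number of uncovered elements.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Illustrative instance.
print(greedy_set_cover(range(1, 8), [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}]))
```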
Hardness of approximation results refine traditional NP-hardness results. Typically, an NP-hard optimization problem falls into one of the following categories (in order of increasing intractability):

- Problems approximable within every constant factor 1+ε (i.e., problems with a polynomial-time approximation scheme), such as knapsack.
- Problems approximable within some constant factor, but with no approximation scheme unless P=NP, such as vertex cover and MAX-3SAT.
- Problems approximable within a logarithmic factor, but within no constant factor unless P=NP, such as set cover.
- Problems not approximable within a factor of n^ε, for some ε > 0, unless P=NP, such as MAX CLIQUE.
It is useful to know which problems are easier to approximate and which are harder to approximate for several reasons. First, one often has a choice of what theoretical problem to use to formulate a particular real-life problem. It's probably better to choose a problem that allows a better approximation factor. Second, in searching for practical solutions to a particular problem that you have to solve, it is useful to know what to look for. If you know the problem has no polynomial-time algorithm that guarantees better than an n^ε approximation ratio, then you know you will have to resort to heuristics that take advantage of the structure of the particular instances you want to solve.
We sketch some of the fundamental ideas at the core of hardness of approximation results. We present a simple result showing that MAX CLIQUE is hard to approximate. The result we show is not the strongest that can be shown.
We will use the following theorem about probabilistically checkable proofs:
THM: NP ⊆ PCP(log n, 1)
(We won't cover the proof of this theorem, although we will describe a proof of a weaker version.)
What this theorem means is the following. For every language L in NP, there is a PCP-verifier V with the following properties:

- V takes as input an X, a bit-string P (the purported proof) of size n^c, and a bit-string R (the random bits) of size c·log(n), where n=|X| and c is a constant depending on L.
- V reads all of R, but reads only a constant number of bits of P; which bits it reads is determined by X and R.
- V runs in time polynomial in n.
- If X ∈ L, then there is a proof P such that V(X,P,R) accepts for every one of the possible bit-strings R.
- If X ∉ L, then for every P, V(X,P,R) accepts for fewer than half of the possible bit-strings R.
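For the sketches below it helps to model such a verifier abstractly in code. The interface here, including the names `queries` and `decide` and the split into two methods, is an illustrative assumption rather than anything fixed by the theorem.

```python
from typing import Mapping, Protocol, Sequence

class PCPVerifier(Protocol):
    """Abstract view of a PCP(log n, 1) verifier for a language L."""

    def queries(self, X: str, R: str) -> Sequence[int]:
        """The constantly many positions of the proof P that V reads,
        given the input X and the random bit-string R."""
        ...

    def decide(self, X: str, R: str, answers: Mapping[int, int]) -> bool:
        """Whether V accepts when the queried positions of P hold the
        bit values given in `answers`."""
        ...
```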
This theorem underlies most of the known hardness of approximation results, much as Cook's theorem underlies most NP-hardness results. To illustrate how this theorem relates to hardness of approximation, we show the following theorem:
THM: Unless P=NP, MAX CLIQUE has no polynomial-time (1/2)-approximation algorithm.
PROOF SKETCH:
Let L be any NP-complete language. By the PCP theorem, L has a PCP verifier V as described above. Since this verifier V for L exists, the following polynomial-time algorithm A, called a "gap-producing reduction", also exists. Given an input X, A constructs a graph G as follows. The vertices of G are the pairs (R, W), where R is one of the possible random bit-strings and W is an accepting "way" for R: an assignment of values to the (constantly many) bits of P that V reads on random string R, under which V accepts. Two vertices (R1, W1) and (R2, W2) are joined by an edge if R1 ≠ R2 and the ways W1 and W2 are consistent, i.e., they do not assign different values to any bit position of P that they both touch. Since there are only polynomially many random strings, and only constantly many bits read for each, G has polynomial size and A runs in polynomial time.
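A minimal sketch of this gap-producing reduction, assuming a verifier object with the `queries`/`decide` interface sketched above; the function and variable names are illustrative.

```python
from itertools import product

def build_gap_graph(X, random_strings, verifier):
    """Build the graph G from the proof sketch: one vertex per (R, accepting way),
    and edges between vertices with distinct R's and consistent ways."""
    vertices = []
    for R in random_strings:
        positions = verifier.queries(X, R)
        # Enumerate all settings of the queried proof bits; keep the accepting ones.
        for bits in product([0, 1], repeat=len(positions)):
            way = dict(zip(positions, bits))
            if verifier.decide(X, R, way):
                vertices.append((R, way))

    def consistent(w1, w2):
        # Two ways are consistent if they agree on every shared proof position.
        return all(w1[p] == w2[p] for p in w1.keys() & w2.keys())

    edges = [(u, v)
             for i, u in enumerate(vertices)
             for v in vertices[i + 1:]
             if u[0] != v[0] and consistent(u[1], v[1])]
    return vertices, edges
```

A clique in this graph is exactly a set of distinct random strings together with answers that could all come from a single proof string, which is what the next two paragraphs exploit.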
Suppose X ∈ L. Then there exists a proof P such that V(X,P,R) accepts for, say, K of the possible bit-strings R. For each such R, the values P assigns to the bits V reads form an accepting way W_R; these ways are mutually consistent, since they all come from the same P, so the graph G contains a clique of size K.
Conversely, suppose there is a clique of size K, say {(R1,W1), (R2,W2), ..., (RK,WK)}. By the construction, the Ri's must be distinct. Furthermore, the ways {W1, W2, ..., WK} are mutually consistent. This means that there is at least one proof string P whose bits agree with each of these ways Wi. For this proof string P, V(X,P,R) accepts for at least K of the possible bit-strings R.
From the previous two paragraphs, it follows that the maximum clique size in G equals the maximum (over all proofs P) of the number of strings R causing V(X,P,R) to accept.
Let K = (the number of possible bit-strings R) = 2^(c·log(n)) = n^c.
Thus, if X∈ L, then the maximum clique size is K. But if X∉ L, then the maximum clique size is less than K/2.
Now, suppose that there were a polynomial-time (1/2)-approximation algorithm for MAX CLIQUE, i.e., one that always returns a clique of size at least half the maximum. Under this assumption we will show that P=NP.
We claim that the following polynomial-time algorithm would decide L (the NP-complete language). Given an input X: run A on X to obtain the graph G; run the (1/2)-approximation algorithm on G; accept if the clique it returns has size at least K/2, and reject otherwise.
It is easy to verify that (because of the properties of A) this algorithm would run in polynomial time and decide L. Since L is NP-complete, it would follow that P=NP.
QED
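To make the claimed decision procedure concrete, here is a sketch that reuses `build_gap_graph` from above; `half_approx_max_clique` is a hypothetical (1/2)-approximation routine for MAX CLIQUE, assumed to exist only for the sake of the argument.

```python
def decide_L(X, random_strings, verifier, half_approx_max_clique):
    """Decide membership in L, assuming a (1/2)-approximation for MAX CLIQUE."""
    vertices, edges = build_gap_graph(X, random_strings, verifier)
    K = len(random_strings)  # K = 2^(c*log n) = n^c possible random strings
    clique = half_approx_max_clique(vertices, edges)
    # If X is in L, the maximum clique has size K, so the approximation returns
    # a clique of size >= K/2; if X is not in L, every clique has size < K/2.
    return len(clique) >= K / 2
```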