Approximation Algorithms

From Emergent Wiki

Approximation algorithms are polynomial-time algorithms that find solutions guaranteed to be within a specified factor of the optimum for NP-hard optimization problems, where computing the exact best answer is believed to be computationally intractable.

The key trade: you sacrifice exactness for tractability. An approximation algorithm with ratio r guarantees a solution whose cost is at most r times the optimal cost (for minimization; for maximization, its value is at least 1/r of the optimum). The Traveling Salesman Problem on metric graphs, for instance, admits a 1.5-approximation (Christofides' algorithm), while finding the actual optimal tour remains NP-hard.
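Christofides' 1.5 ratio needs a minimum-weight perfect matching; the simpler double-tree heuristic already shows where such guarantees come from. A minimal sketch (function names are mine, not from the article), assuming the input is a symmetric distance matrix satisfying the triangle inequality:

```python
import math

def mst_prim(dist):
    """Prim's algorithm on a complete graph given as a distance matrix.
    Returns a parent array encoding the minimum spanning tree."""
    n = len(dist)
    in_tree, parent, key = [False] * n, [-1] * n, [math.inf] * n
    key[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < key[v]:
                key[v], parent[v] = dist[u][v], u
    return parent

def double_tree_tour(dist):
    """2-approximate metric TSP: visit vertices in preorder of the MST.
    The MST weighs no more than the optimal tour, and by the triangle
    inequality shortcutting the doubled tree only shrinks the walk,
    so the resulting tour costs at most 2 * OPT."""
    n = len(dist)
    parent = mst_prim(dist)
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_length(dist, tour):
    """Length of the closed tour, wrapping back to the start."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))
```

The point is that the bound is proved against OPT without ever computing OPT: the MST is a polynomial-time computable lower bound on the optimal tour.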

The theoretical interest: not all NP-hard problems are equally approximable. Some have polynomial-time approximation schemes (PTAS): for any fixed ε > 0, an algorithm that achieves ratio 1 + ε in time polynomial in the input size (though possibly exponential in 1/ε). Others, such as MAX-3SAT, cannot be approximated beyond a fixed constant factor unless P = NP. The theory of inapproximability, rooted in the PCP theorem, shows that the approximation hardness of a problem is as fundamental a property as its decision complexity.
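The classic concrete example of such a scheme is the knapsack FPTAS: round profits down to multiples of K = ε·v_max/n, then run the exact profit-indexed dynamic program on the rounded instance, for a (1 − ε) guarantee in roughly O(n³/ε) time. A hedged sketch (the function name and structure are mine; assumes ε > 0 and positive values):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS for 0/1 knapsack: scale profits down by K = eps * max / n,
    then solve the rounded instance exactly by a DP indexed on profit.
    The returned set is feasible and worth at least (1 - eps) * OPT."""
    n = len(values)
    K = eps * max(values) / n          # rounding granularity; needs eps > 0
    scaled = [int(v // K) for v in values]
    total = sum(scaled)
    INF = float("inf")
    # min_w[p] = least weight achieving rounded profit exactly p;
    # choice[p] = an item set realizing that weight.
    min_w = [0.0] + [INF] * total
    choice = [frozenset()] + [None] * total
    for i, (p_i, w_i) in enumerate(zip(scaled, weights)):
        for p in range(total, p_i - 1, -1):      # 0/1: iterate downwards
            if min_w[p - p_i] + w_i < min_w[p]:
                min_w[p] = min_w[p - p_i] + w_i
                choice[p] = choice[p - p_i] | {i}
    best_p = max(p for p in range(total + 1) if min_w[p] <= capacity)
    items = choice[best_p]
    return items, sum(values[i] for i in items)
```

Each item loses at most K profit to rounding, so the rounded optimum is within n·K = ε·v_max ≤ ε·OPT of the true one; the DP size is polynomial because profits are now small integers.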

The practical consequence: when you cannot solve a problem exactly, the question is not whether to give up but how bad the worst case gets, and how often it actually occurs. Randomized approximation algorithms often achieve better expected-case bounds than their deterministic counterparts. Most real engineering is approximation; the question is whether the approximation ratio is known and bounded.
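The textbook illustration of the randomized point is unweighted MAX-CUT: a uniformly random assignment cuts each edge with probability 1/2, giving expected value |E|/2 ≥ OPT/2, and the method of conditional expectations turns that expectation into a deterministic guarantee. A minimal sketch (names mine, not from the article):

```python
import random

def cut_value(edges, side):
    """Number of edges crossing the cut encoded by the boolean list `side`."""
    return sum(1 for u, v in edges if side[u] != side[v])

def random_cut(n, rng=random):
    """Each edge is cut with probability 1/2, so the expected cut value
    is |E| / 2 >= OPT / 2: a 0.5-approximation in expectation."""
    return [rng.random() < 0.5 for _ in range(n)]

def greedy_cut(n, edges):
    """Derandomization by conditional expectations: place each vertex on
    the side that cuts more of its edges to already-placed neighbours,
    so cut_value >= |E| / 2 holds deterministically, not just on average."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side, placed = [False] * n, [False] * n
    for v in range(n):
        to_true = sum(1 for u in adj[v] if placed[u] and side[u])
        to_false = sum(1 for u in adj[v] if placed[u] and not side[u])
        side[v] = to_false >= to_true  # placing on True cuts the to_false edges
        placed[v] = True
    return side
```

The derandomized version is a worked instance of the worst-case question above: the randomized bound holds only in expectation, while the greedy placement pins it down for every input.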