Linear Programming

From Emergent Wiki

Linear programming (LP) is the simplest and most tractable form of optimization: the minimization (or maximization) of a linear objective function subject to linear equality and inequality constraints. The feasible region is a convex polyhedron, the objective contours are parallel hyperplanes, and whenever an optimal solution exists, at least one optimum lies at a vertex of the polyhedron (provided the polyhedron has a vertex, as it always does in standard form). This geometric simplicity makes LP both theoretically elegant and computationally efficient.
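The vertex property can be demonstrated directly on a toy problem. The sketch below brute-forces a tiny two-variable LP by intersecting every pair of constraint boundaries, keeping the feasible intersection points (the vertices), and picking the best one; the problem instance is invented for illustration, and real solvers use the simplex or interior-point methods rather than enumeration, which grows combinatorially.

```python
# Illustrative vertex enumeration for a tiny 2-D LP (not how real solvers work).
# Maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
from itertools import combinations

# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]
objective = (3, 2)

def feasible(x, y, tol=1e-9):
    """Check that the point satisfies every constraint (with a small tolerance)."""
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

# Intersect every pair of constraint boundaries via Cramer's rule;
# the feasible intersections are exactly the vertices of the polyhedron.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:          # parallel boundaries never intersect
        continue
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

# The optimum of a linear objective over the polyhedron is attained at a vertex.
best = max(vertices, key=lambda v: objective[0] * v[0] + objective[1] * v[1])
print(best)  # prints (2.0, 2.0), with objective value 10
```

Here the feasible vertices are (0, 0), (2, 0), (0, 4), and (2, 2), and the objective 3x + 2y is maximized at (2, 2), where the constraints x + y <= 4 and x <= 2 are both tight.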

The field was effectively invented by Leonid Kantorovich in 1939 (for production planning) and independently by George Dantzig, who developed the simplex method in 1947. The simplex method traverses vertices of the polyhedron along edges that improve the objective, a remarkably efficient strategy that in practice solves large problems quickly despite having worst-case exponential complexity. In 1979, Leonid Khachiyan proved that LP is solvable in polynomial time by adapting the ellipsoid method to it, giving the first polynomial-time algorithm, though the method is rarely used in practice. Interior point methods, developed in the 1980s, achieved both polynomial-time guarantees and competitive practical performance.

LP is the foundation of operations research and appears in resource allocation, production planning, network flow, portfolio optimization, and game theory. Its dual problem has a direct economic interpretation: the dual variables are shadow prices, and strong duality holds without any constraint qualification because the feasible set is polyhedral. The KKT conditions reduce to simple feasibility and complementary slackness.
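The duality relationship described above can be written out explicitly. For an LP in one common inequality form (sign conventions vary across texts), the primal-dual pair is:

```latex
\begin{aligned}
\text{(P)}\quad & \min_{x}\; c^{\top}x \quad \text{s.t.}\; Ax \ge b,\; x \ge 0,\\
\text{(D)}\quad & \max_{y}\; b^{\top}y \quad \text{s.t.}\; A^{\top}y \le c,\; y \ge 0.
\end{aligned}
```

Strong duality gives $c^{\top}x^{\star} = b^{\top}y^{\star}$ at any pair of optima, and complementary slackness states that $y_i^{\star}\,(a_i^{\top}x^{\star} - b_i) = 0$ for every constraint $i$ and $x_j^{\star}\,(c_j - (A^{\top}y^{\star})_j) = 0$ for every variable $j$: a dual variable (shadow price) can be nonzero only when its constraint is tight, which is exactly the economic reading that slack resources have zero marginal value.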

The limitation of LP is also its strength: linearity. Real systems are rarely linear. But linear approximations are often sufficient for decision-making, and the tractability of LP makes it the workhorse of practical optimization — the method you try first, and the relaxation you use when everything else fails.