Compiler Theory
Compiler theory is the formal study of translating programs written in one language into semantically equivalent programs in another — typically from a high-level programming language to machine code. It is applied formal language theory: the front end of every compiler is a recognition algorithm for a context-free grammar, the type checker is a membership algorithm over a typed expression language, and the optimizer is a transformation system over an intermediate representation.
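The front end described above can be sketched in miniature. The following is a hypothetical example, not any particular compiler's code: the lexer is a finite-automaton pass (here expressed as a regular expression), and the recursive-descent parser is a recognition algorithm for the toy context-free grammar `Expr -> Term (('+'|'-') Term)*`, `Term -> NUM | '(' Expr ')'`.

```python
import re

# The regular expression plays the role of the lexer's finite automaton:
# skip whitespace, then capture either a number or a single-char operator.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def lex(src):
    tokens = []
    for num, op in TOKEN_RE.findall(src):
        tokens.append(("NUM", int(num)) if num else ("OP", op))
    return tokens

def parse(tokens):
    """Recursive-descent recognizer for the toy grammar; builds an AST."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else ("EOF", None)

    def term():
        nonlocal pos
        kind, val = peek()
        if kind == "NUM":                     # Term -> NUM
            pos += 1
            return ("num", val)
        if val == "(":                        # Term -> '(' Expr ')'
            pos += 1
            node = expr()
            assert peek()[1] == ")", "expected ')'"
            pos += 1
            return node
        raise SyntaxError(f"unexpected token {val!r}")

    def expr():
        nonlocal pos
        node = term()                         # Expr -> Term (('+'|'-') Term)*
        while peek()[1] in ("+", "-"):
            op = tokens[pos][1]
            pos += 1
            node = (op, node, term())
        return node

    tree = expr()
    assert peek()[0] == "EOF", "trailing input"
    return tree
```

For example, `parse(lex("1 + (2 - 3)"))` yields the tree `("+", ("num", 1), ("-", ("num", 2), ("num", 3)))` — the parenthesized subexpression becomes a nested node, exactly as the grammar dictates.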
The central question compiler theory confronts is decidability: which properties of a program can be determined at compile time (before execution), and which require running the program to know? Rice's Theorem establishes a hard boundary — any non-trivial semantic property of programs is undecidable. Compiler optimizations therefore operate on syntactic approximations of semantic properties. When a compiler proves that a variable is dead at some point, it is not deciding the semantic question directly; it is proving a conservative approximation: the analysis over-approximates the set of possible executions, so a fact established for every execution it models holds for every real execution.
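The dead-variable example can be made concrete with a minimal sketch, assuming a hypothetical three-address IR where each instruction is a `(dest, uses)` pair. Backward liveness over straight-line code is the conservative approximation described above: a store whose destination is not live afterwards is dead on every execution the analysis models, hence on every real one. (The `live_out` set models variables observable after the block, which is what keeps the analysis conservative.)

```python
def dead_stores(instrs, live_out=()):
    """instrs: list of (dest, uses) pairs, e.g. ('t', ['a', 'b']) for t = a + b.
    live_out: variables observable after the block (returned, printed, ...).
    Returns the indices of instructions whose stores are provably dead."""
    live = set(live_out)   # variables live *after* the current instruction
    dead = []
    for i in reversed(range(len(instrs))):
        dest, uses = instrs[i]
        if dest not in live:
            dead.append(i)      # no later use of dest: this store is dead
        live.discard(dest)      # the store kills the previous value of dest
        live.update(uses)       # the reads make the operands live
    return sorted(dead)

# Hypothetical block: the first write to t is overwritten before any read,
# so it is dead; the second t and the store to r (live out) are not.
prog = [
    ("t", ["a", "b"]),   # t = a + b   <- dead store
    ("t", ["a", "c"]),   # t = a + c
    ("r", ["t"]),        # r = t
]
```

Here `dead_stores(prog, live_out={"r"})` returns `[0]`. The approximation is one-sided by design: the analysis may fail to flag a store that is dead in reality, but it never flags a store that some real execution needs.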
The core phases — lexical analysis (finite automata), parsing (pushdown automata), semantic analysis (attribute grammars), optimization (dataflow analysis, abstract interpretation), and code generation — form a pipeline that transforms human-readable text into machine-executable binary. Each phase is a formal language problem in disguise. Abstract interpretation has most clearly revealed this structure — by showing that static analyses are approximations of the program's collecting semantics, it unified previously ad hoc techniques under a single mathematical framework.
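The framework can be illustrated with a toy abstract interpreter over the classic sign domain — a hypothetical sketch, not the general theory. Concrete integers are abstracted to one of `{-, 0, +, T}` (with `T` meaning "any sign"), and each abstract operation soundly over-approximates its concrete counterpart, so the result computed on abstract values approximates the collecting semantics of the expression.

```python
NEG, ZERO, POS, TOP = "-", "0", "+", "T"   # the sign abstract domain

def sign(n):
    """Abstraction function: concrete integer -> abstract sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def add(a, b):
    """Abstract addition: sound over-approximation of +."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b and a in (POS, NEG):
        return a                 # (+) + (+) = (+),  (-) + (-) = (-)
    return TOP                   # e.g. (+) + (-): the sign is unknown

def mul(a, b):
    """Abstract multiplication: sound over-approximation of *."""
    if ZERO in (a, b):
        return ZERO              # anything times zero is zero
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def aeval(e, env):
    """Abstractly evaluate ('num', n) | ('var', x) | (op, left, right)."""
    if e[0] == "num":
        return sign(e[1])
    if e[0] == "var":
        return env[e[1]]
    a, b = aeval(e[1], env), aeval(e[2], env)
    return add(a, b) if e[0] == "+" else mul(a, b)
```

With `env = {"x": POS}`, the analysis proves `x * x` is positive without knowing `x`, while `x + (-1)` yields `T` — the domain is too coarse to decide it, which is precisely the kind of precision loss a sound approximation must accept.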