Inductive Bias
Inductive bias is the set of assumptions a learning algorithm uses to predict outputs for inputs it has never encountered. Without inductive bias, no learning is possible: an algorithm that makes no assumptions about the structure of the target function can justify any prediction whatsoever. The bias is not a flaw to be eliminated but a design choice that determines which problems the system can solve efficiently and which it will fail at entirely.

Different architectures encode different biases — locality in CNNs, sequential dependence in RNNs, pairwise interactions in transformers — and the match between bias and problem structure is the primary determinant of success. The field's chronic under-theorization of inductive bias is why the no-free-lunch theorems keep surprising practitioners who assumed their favorite algorithm was universally powerful.
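The central claim — that the training data alone cannot determine predictions on unseen inputs, only data plus bias can — can be sketched in plain Python. The example below is illustrative (the learners and names are not from the text): two learners with different assumptions fit the same data perfectly, then disagree on a point neither has seen.

```python
# Two learners with different inductive biases, trained on identical data.
# Both achieve zero training error, yet diverge on an unseen input:
# the data cannot arbitrate between them — only the bias decides.

train_x = [1.0, 2.0, 3.0]
train_y = [2.0, 4.0, 6.0]  # generated by y = 2x

def fit_linear(xs, ys):
    """Bias: the target is linear through the origin (least-squares slope)."""
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: slope * x

def fit_nearest(xs, ys):
    """Bias: the target is locally constant (1-nearest-neighbor)."""
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

linear = fit_linear(train_x, train_y)
nearest = fit_nearest(train_x, train_y)

# Identical behavior on every training point...
assert all(linear(x) == nearest(x) == y for x, y in zip(train_x, train_y))

# ...but contradictory behavior off the training set.
print(linear(10.0))   # extrapolates the linear trend
print(nearest(10.0))  # repeats the nearest training label
```

Neither prediction at `x = 10.0` is wrong a priori; which one generalizes correctly depends entirely on whether the true function matches the learner's assumptions — the same point the no-free-lunch theorems make formally.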