Filter bubble
A filter bubble is the epistemic condition produced when algorithmic content curation — on social media platforms, search engines, and recommendation systems — selectively shows users information that conforms to their existing beliefs and preferences, shielding them from contradictory perspectives. The term was coined by internet activist Eli Pariser in 2011 to describe the personalization logic of platforms like Facebook and Google: as each click and engagement signal trains the algorithm on what the user prefers, the algorithm increasingly filters the information environment to match those preferences.
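The feedback loop can be sketched as a toy simulation. This is a hypothetical model with made-up parameters, not any platform's actual ranking system: a recommender keeps per-topic weights, reinforces a weight whenever the user clicks that topic, and samples what to show in proportion to the weights. A mild behavioral preference compounds into a heavily skewed feed.

```python
import random

# Toy sketch of engagement-trained personalization (hypothetical model).
# The user engages with topic "A" 70% of the time and "B" 30% of the
# time, but the recommender's reinforcement loop amplifies that gap.

def simulate(rounds=1000, seed=0):
    rng = random.Random(seed)
    weights = {"A": 1.0, "B": 1.0}      # recommender's learned preferences
    click_prob = {"A": 0.7, "B": 0.3}   # user's true engagement rates
    shown = {"A": 0, "B": 0}
    for _ in range(rounds):
        total = sum(weights.values())
        # show a topic in proportion to its current weight
        topic = "A" if rng.random() < weights["A"] / total else "B"
        shown[topic] += 1
        if rng.random() < click_prob[topic]:
            weights[topic] += 1.0       # each click reinforces that topic
    return shown

shown = simulate()
```

Because every click raises the probability of showing more of the same topic, exposure drifts well past the underlying 70/30 engagement split: the loop converts a preference into a filter.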
The concern is not merely that users see information they like. It is that the shared information environment of public discourse — the common ground that makes democratic deliberation possible — fragments into millions of personalized streams with little overlap. Where the epistemic ideal of democracy requires that citizens share enough common information to reason together about collective problems, the filter bubble produces populations holding divergent factual beliefs about the same events, sustained by algorithms optimized for engagement rather than accuracy.
The empirical evidence is contested. Studies using platform data have found that algorithmic filtering is a weaker driver of political polarization than self-selection — users actively choose partisan sources, and the algorithm amplifies rather than creates this tendency. But the design question remains: even if filter bubbles are partly self-inflicted, information cascades within bubbles can amplify low-quality information faster than correction can reach users, and the structural properties of algorithmic curation make this dynamic systematically difficult to observe from inside.
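The cascade dynamic can be illustrated with a toy spreading model (all parameters here are hypothetical, chosen only to make the mechanism visible): a claim and its correction propagate through the same random-contact network, but the claim spreads with a higher per-contact share probability, standing in for engagement-driven amplification. Because the advantage compounds at every step, the correction reaches far fewer users in the same time.

```python
import random

# Hypothetical spreading model: each user who has seen the message
# contacts a few random users per step, each of whom shares it onward
# with probability p_share. Higher p_share compounds multiplicatively.

def spread(n_users, contacts_per_step, p_share, steps, seed):
    rng = random.Random(seed)
    reached = {0}                        # one initial poster
    for _ in range(steps):
        new = set()
        for _u in reached:
            for _ in range(contacts_per_step):
                v = rng.randrange(n_users)
                if rng.random() < p_share:
                    new.add(v)
        reached |= new
    return len(reached)

# The claim is shared more readily (0.6) than the correction (0.2).
claim = spread(n_users=10_000, contacts_per_step=3, p_share=0.6, steps=6, seed=1)
fix   = spread(n_users=10_000, contacts_per_step=3, p_share=0.2, steps=6, seed=1)
```

The gap between `claim` and `fix` grows with each step, which is the structural point: a correction launched on equal footing still loses the race if the sharing rates differ.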