The Hollow Face Illusion reveals how deeply assumptions govern perception. Even when viewing a concave (hollow) face mask from the inside, the brain automatically perceives the nose as projecting outward, because every face the observer has ever encountered is convex. This assumption is so powerful that it overrides contradictory sensory evidence, including binocular depth cues, demonstrating how unexamined beliefs shape what we "see" as reality.
The human mind constructs models of how the world "normally" operates, then treats these models as baseline reality. When economists developed rational choice theory in the mid-20th century, they assumed humans naturally act as utility-maximizing agents who make optimal decisions when given complete information. This assumption—that people are fundamentally rational actors—shaped economic policy for decades despite mounting evidence of systematic deviations from rational behavior (Kahneman, 2011).
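The normative rule itself is simple to state: given a set of gambles, choose the one with the highest probability-weighted utility. The sketch below illustrates that rule; the logarithmic utility function and the gamble values are illustrative assumptions, not drawn from the sources cited here.

```python
# Normative expected-utility choice: a sketch of the rational-actor
# assumption, not a model of real behavior. The utility function and
# the gambles are invented for illustration.
import math

def expected_utility(gamble, utility):
    """Expected utility of a gamble given as [(probability, outcome), ...]."""
    return sum(p * utility(x) for p, x in gamble)

# A concave utility of wealth (diminishing marginal utility).
utility = lambda x: math.log(1 + x)

gambles = {
    "sure_thing": [(1.0, 450)],
    "risky":      [(0.5, 1000), (0.5, 0)],
}

# The rational actor simply picks whichever gamble maximizes expected utility.
best = max(gambles, key=lambda g: expected_utility(gambles[g], utility))
print(best)  # "sure_thing" under this concave utility
```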
Research in behavioral economics has documented how people consistently violate the assumptions of rational actor theory. Kahneman and Tversky's (1979) prospect theory demonstrated that individuals evaluate potential losses and gains asymmetrically, exhibiting loss aversion that contradicts expected utility theory. People routinely make choices influenced by how options are framed rather than their objective value, accept sunk costs as justifications for continued investment, and discount future rewards hyperbolically rather than exponentially (Thaler, 2015).
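Two of these deviations are easy to make concrete. The sketch below implements the prospect-theory value function, using parameter values in the range commonly estimated in the prospect-theory literature (alpha ≈ 0.88 for curvature, lambda ≈ 2.25 for loss aversion), and contrasts hyperbolic with exponential discounting; the discount rates and dollar amounts are invented for illustration.

```python
# Sketch of two documented deviations from expected utility:
# (1) loss aversion via the prospect-theory value function, and
# (2) hyperbolic vs. exponential discounting of delayed rewards.
# Parameter values follow commonly cited estimates; rewards and
# discount rates are illustrative assumptions.

def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Loss aversion: a $100 loss looms larger than a $100 gain feels good.
print(pt_value(100))   # ~57.5
print(pt_value(-100))  # ~-129.5, more than twice the magnitude

def hyperbolic(amount, delay, k=0.05):
    """Hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.05):
    """Exponential discounting: the normative constant-rate benchmark."""
    return amount * (1 - r) ** delay

# Preference reversal: a hyperbolic discounter prefers $50 now over $100
# in 30 days, yet prefers $100 in 395 days over $50 in 365 days, even
# though the 30-day gap is identical.
print(hyperbolic(50, 0) > hyperbolic(100, 30))       # True: take the $50 now
print(hyperbolic(50, 365) > hyperbolic(100, 395))    # False: now wait for $100

# The exponential benchmark never reverses: the preference order is the
# same whether the pair of options is near or far in time.
print(exponential(50, 0) > exponential(100, 30))     # True
print(exponential(50, 365) > exponential(100, 395))  # True: consistent
```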
Neurobiological research reveals why these deviations occur. The brain's reward and decision-making systems evolved to solve ancestral problems, not to maximize abstract utility. The ventromedial prefrontal cortex, which integrates emotional and cognitive information during decision-making, shows activation patterns consistent with heuristic-based rather than optimal choice strategies (Camerer, Loewenstein, & Prelec, 2005). These default assumptions about rationality persist not because they accurately describe human behavior, but because they simplify an otherwise impossibly complex decision space.
People automatically construct explanations for others' behavior, but these explanations systematically overemphasize internal dispositions while underweighting situational factors. This pattern, termed the fundamental attribution error by Ross (1977), represents one of the most robust findings in social psychology. When observing another person's actions, individuals tend to attribute behavior to stable personality traits rather than to temporary circumstances or environmental pressures.
Classic research by Jones and Harris (1967) demonstrated this bias experimentally. Participants read essays either supporting or opposing Fidel Castro, knowing the essayist's position had been randomly assigned. Despite this knowledge, participants still rated essay writers as having attitudes consistent with their essays' positions. Even when explicitly told that situational factors determined behavior, observers continued attributing actions to the actor's disposition (Gilbert & Malone, 1995).
The neural basis of attribution reflects the brain's theory of mind system. The temporoparietal junction and medial prefrontal cortex, regions implicated in mental state attribution, show differential activation when people make dispositional versus situational attributions (Moran, Jolly, & Mitchell, 2012). Importantly, this system appears to default to dispositional inference unless corrective situational information is deliberately processed—a cognitively demanding task that often fails under time pressure or cognitive load (Gilbert, Pelham, & Krull, 1988).
Cultural factors modulate but do not eliminate this bias. While Western individualistic cultures show stronger dispositional attribution than East Asian collectivist cultures, both patterns reflect assumptions about what causes behavior. The difference lies in which default model—individual agency or social context—takes precedence (Morris & Peng, 1994).
The human cognitive system is strongly biased toward perceiving continuity and stability, even in the face of change. This manifests most clearly in status quo bias: the tendency to prefer current states of affairs over alternative options, independent of their objective merits. Samuelson and Zeckhauser (1988) found that decision-makers systematically favor existing conditions over changes, with this preference strengthening as the number of alternatives increases.
Neurobiological evidence links this bias to the brain's prediction error system. The basal ganglia and anterior cingulate cortex respond differentially to expected versus unexpected outcomes, with unexpected changes generating stronger neural responses than confirmations of existing patterns (Dunne & O'Doherty, 2013). This architecture creates an implicit penalty for deviation from established baselines, effectively assuming that current states represent "correct" or "natural" conditions.
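A standard computational rendering of prediction-error signaling is the delta rule, in which a stored expectation moves toward each observed outcome in proportion to the surprise it generates. The sketch below is that generic model, assuming an arbitrary learning rate and reward values; it is not a claim about the specific circuits named above.

```python
# Delta-rule (Rescorla-Wagner style) learner: a standard computational
# sketch of prediction-error signaling. Learning rate and rewards are
# illustrative assumptions.

def update(value, reward, learning_rate=0.1):
    """Move the stored expectation toward the observed outcome."""
    prediction_error = reward - value     # surprise: outcome minus expectation
    return value + learning_rate * prediction_error

value = 0.0
for trial in range(50):
    value = update(value, reward=1.0)     # a stable, fully expected reward
print(round(value, 3))                    # ~0.995: prediction errors vanish

# Once the baseline is learned, a confirmation generates almost no error
# signal, while a change generates a large one -- the implicit penalty
# for deviation described above.
print(round(1.0 - value, 3))   # confirmation: near-zero error
print(round(0.0 - value, 3))   # unexpected omission: large negative error
```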
The assumption of stability extends beyond preference to perception. When tracking objects or events over time, people automatically fill gaps with continuity assumptions. If an object briefly disappears behind an occluder, observers assume it continues existing in the same state rather than transforming or ceasing to exist (Scholl & Leslie, 1999). These assumptions serve useful purposes in stable environments but generate systematic errors when confronting genuine discontinuities or regime changes.
People automatically generalize knowledge from familiar contexts to novel situations, often failing to recognize when contextual differences invalidate such transfers. This challenge—termed the problem of domain generalization in machine learning—also afflicts human cognition. Austerweil, Sanborn, and Griffiths (2019) demonstrated that people learn not just specific facts but also abstract principles about how to generalize, with these meta-level assumptions strongly influenced by prior experience.
The difficulty lies in determining which aspects of knowledge should transfer across contexts. When people learn a concept in one domain—for example, that taxonomic similarity predicts shared properties in biology—they may inappropriately apply this principle to domains where different organizational structures prevail (Heit & Rubinstein, 1994). The brain must strike a balance between overgeneralizing (applying learning too broadly) and undergeneralizing (failing to recognize relevant similarities across contexts).
Neurocognitive research on semantic cognition reveals how the anterior temporal lobe acts as a hub that integrates modality-specific information into generalizable concepts (Lambon Ralph, Jefferies, Patterson, & Rogers, 2017). This system enables abstraction but also creates vulnerability: once a generalization pattern is established, it tends to be applied automatically unless explicit contextual cues signal its inappropriateness. The assumption that knowledge structures transfer across contexts is computationally efficient but produces systematic errors in novel environments.
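The cost of automatic transfer can be shown with a deliberately artificial learner: a rule extracted in one domain is applied by default in a second domain where the underlying regularity is reversed. The domains, feature, and data below are invented purely for illustration, not taken from the cited studies.

```python
# Toy illustration of overgeneralization: a rule learned in one domain
# (where the feature predicts the label) is applied by default in a
# second domain where the regularity is reversed.
import random

random.seed(0)

def make_domain(correlation):
    """Generate (feature, label) pairs; correlation's sign sets the rule."""
    data = []
    for _ in range(200):
        feature = random.choice([0, 1])
        label = feature if correlation > 0 else 1 - feature
        data.append((feature, label))
    return data

def learn_rule(data):
    """Learn the majority mapping from feature to label."""
    agree = sum(1 for f, l in data if f == l)
    return (lambda f: f) if agree >= len(data) / 2 else (lambda f: 1 - f)

def accuracy(rule, data):
    return sum(1 for f, l in data if rule(f) == l) / len(data)

rule = learn_rule(make_domain(correlation=+1))      # learned in domain A
print(accuracy(rule, make_domain(correlation=+1)))  # 1.0: transfer succeeds
print(accuracy(rule, make_domain(correlation=-1)))  # 0.0: transfer fails badly
```

Nothing in the learner flags that the second domain differs; the rule is simply applied until the errors arrive, which is the pattern the text describes.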
The human drive for social cohesion creates powerful assumptions about group consensus and collective wisdom. Janis (1972) identified groupthink as a mode of thinking where group members prioritize harmony and consensus over critical evaluation of alternatives. In analyzing historical policy failures—including the Bay of Pigs invasion and escalation of the Vietnam War—Janis found that cohesive groups systematically suppressed dissent, stereotyped outsiders, and maintained illusions of unanimity.
The psychological mechanisms underlying groupthink reflect deep-seated social cognitive biases. Conformity research by Asch (1951) demonstrated that approximately 75% of participants conformed to obviously incorrect group judgments at least once, with conformity pressure intensifying in cohesive groups facing ambiguous situations. This conformity stems not merely from conscious compliance but from genuine perception shifts: when group members agree on a position, individuals often come to actually perceive it as more valid (Moscovici & Personnaz, 1980).
Brain imaging studies reveal that social conformity modulates activity in perceptual and decision-making regions. When individuals receive feedback that contradicts group consensus, increased activation appears in the rostral cingulate zone and ventral striatum—regions associated with error processing and social reward (Klucharev, Hytönen, Rijpkema, Smidts, & Fernández, 2009). This neural architecture creates an automatic assumption that group consensus reflects truth, with deviation from consensus experienced as aversive even when the group is demonstrably wrong.
The assumption that social consensus indicates correctness becomes particularly problematic when combined with selective information processing. Groups experiencing groupthink engage in biased search for information that confirms their preferred position while dismissing contradictory evidence (Hart, 1998). The circular logic—"we agree, therefore we must be right, therefore we should continue agreeing"—proves remarkably resistant to correction once established.
Perhaps no psychological research has more dramatically revealed the power of authority assumptions than Milgram's (1974) obedience experiments. When instructed by a scientist in a laboratory coat to administer increasingly severe electric shocks to another person, 65% of participants continued to the maximum 450-volt level despite the "victim's" screams of pain. Milgram's systematic variations demonstrated that obedience stemmed not from sadism or unusual personality traits, but from assumptions about legitimate authority.
Meta-analysis of Milgram's 21 experimental conditions revealed which factors most strongly predicted obedience. Contrary to popular interpretation, the prestige of the setting (Yale University versus a commercial office) had minimal effect. Instead, three factors proved crucial: the experimenter's directiveness (clear commands increased obedience), physical proximity (a nearby authority figure increased compliance), and the presence of dissenting peers (watching other participants refuse increased disobedience) (Burger, 2009; Haslam, Loughnan, & Perry, 2014).
These findings reveal an automatic assumption about authority: that legitimate authorities have the right to command and that obedience represents appropriate behavior. Participants in Milgram's studies entered what he termed an "agentic state"—a psychological condition in which individuals view themselves as instruments carrying out another person's wishes rather than as autonomous moral agents responsible for their own actions (Milgram, 1974). The assumption that authority confers legitimacy proved so powerful that it overrode participants' own moral objections to harming an innocent person.
Recent replications confirm the persistence of these patterns. Burger's (2009) partial replication found 70% of participants willing to continue beyond the 150-volt level (the point where the "learner" first protests), while a French television replication found 81% compliance—exceeding Milgram's original rates (Beauvois, Courbet, & Oberlé, 2012). Neuroscientific investigation using virtual reality versions of the paradigm reveals that obedient actions reduce empathic neural responses, suggesting that the agentic state actively suppresses moral processing systems (Cheetham, Pedroni, Antley, Slater, & Jäncke, 2009).
The assumption of authority legitimacy extends beyond experimental settings into everyday institutional life. People routinely accept that doctors, police officers, judges, and scientific experts possess special knowledge and decision-making rights in their domains. While often functionally appropriate, this assumption becomes dangerous when it prevents critical evaluation of authority directives, particularly when those directives conflict with ethical principles or empirical evidence.
These six categories of assumptions—about world states, human behavior, stability, context transfer, social consensus, and authority—form the invisible architecture of human thought. They enable rapid cognitive processing by providing default interpretations of ambiguous information, but they also create systematic blind spots. When reality deviates from these implicit models, people often distort perception to maintain consistency with their assumptions rather than updating the assumptions to match observed evidence.
Understanding these assumption patterns is crucial for evaluating truth claims. Assertions that align with default assumptions receive less critical scrutiny than those that contradict them, independent of their actual validity. Recognition of one's own assumption patterns—and the willingness to examine them explicitly when evaluating new information—represents a fundamental skill in truth-seeking. The human mind's assumption-driven architecture cannot be eliminated, but it can be recognized and deliberately counteracted through sustained intellectual effort.
The following documented cases provide real-world examples of these assumption patterns in operation: