Truth Index Encyclopedia

ASSUMPTIONS PEOPLE HOLD

How Unexamined Beliefs Shape Reality Construction
Visual Demonstration: The Hollow Face Illusion
[Figure: a convex face lit from the left, with the nose projecting forward, shown beside a concave (hollow) face under the same lighting, which the brain nonetheless interprets as a nose still projecting forward.]

The Hollow Face Illusion reveals how deeply assumptions govern perception. Even when viewing a concave (hollow) face mask from the inside, the brain automatically assumes the nose projects outward—because faces always work that way. This assumption is so powerful that it overrides contradictory sensory evidence, demonstrating how unexamined beliefs shape what we "see" as reality.

Human cognition operates on a vast network of assumptions—unexamined beliefs that guide perception, judgment, and action. These cognitive shortcuts allow rapid processing of complex information, but they also systematically distort understanding when reality deviates from expected patterns. This subsection examines six fundamental categories of assumptions that shape human thought, revealing how these implicit beliefs create predictable errors in reasoning about the world, other people, and social systems.

1. Default World-State Assumptions

The human mind constructs models of how the world "normally" operates, then treats these models as baseline reality. When economists developed rational choice theory in the mid-20th century, they assumed humans naturally act as utility-maximizing agents who make optimal decisions when given complete information. This assumption—that people are fundamentally rational actors—shaped economic policy for decades despite mounting evidence of systematic deviations from rational behavior (Kahneman, 2011).

Research in behavioral economics has documented how people consistently violate the assumptions of rational actor theory. Kahneman and Tversky's (1979) prospect theory demonstrated that individuals evaluate potential losses and gains asymmetrically, exhibiting loss aversion that contradicts expected utility theory. People routinely make choices influenced by how options are framed rather than their objective value, accept sunk costs as justifications for continued investment, and discount future rewards hyperbolically rather than exponentially (Thaler, 2015).
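The two asymmetries described above can be made concrete with the standard prospect-theory value function and a hyperbolic discount curve. The sketch below is illustrative only: the parameter values (α ≈ 0.88, λ ≈ 2.25) are the commonly quoted median estimates from Tversky and Kahneman's later parameter-fitting work, and the discount rates are arbitrary, chosen to make the preference reversal visible.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (loss aversion). Parameter values are commonly
    quoted median estimates, used here purely for illustration."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

def hyperbolic_discount(amount, delay, k=0.25):
    """Hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential_discount(amount, delay, rate=0.1):
    """Exponential discounting, as assumed by expected-utility models."""
    return amount * (1 - rate) ** delay

# Loss aversion: a $100 loss looms larger than a $100 gain.
assert abs(prospect_value(-100)) > prospect_value(100)

# Preference reversal under hyperbolic discounting: $50 now beats
# $100 in 10 days...
assert hyperbolic_discount(50, 0) > hyperbolic_discount(100, 10)
# ...but viewed from 20 days out, the same pair of options reverses.
assert hyperbolic_discount(50, 20) < hyperbolic_discount(100, 30)

# Under exponential discounting the ratio of the two options is constant,
# so no such reversal can occur: the sooner reward wins at both horizons.
assert exponential_discount(50, 0) > exponential_discount(100, 10)
assert exponential_discount(50, 20) > exponential_discount(100, 30)
```

The reversal in the middle block is exactly the pattern that expected utility theory rules out: which option looks better depends on when the choice is viewed, not only on the options themselves.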

Neurobiological research reveals why these deviations occur. The brain's reward and decision-making systems evolved to solve ancestral problems, not to maximize abstract utility. The ventromedial prefrontal cortex, which integrates emotional and cognitive information during decision-making, shows activation patterns consistent with heuristic-based rather than optimal choice strategies (Camerer, Loewenstein, & Prelec, 2005). These default assumptions about rationality persist not because they accurately describe human behavior, but because they simplify an otherwise impossibly complex decision space.

2. Human Behavior Assumptions

People automatically construct explanations for others' behavior, but these explanations systematically over-emphasize internal dispositions while under-weighting situational factors. This pattern, termed the fundamental attribution error by Ross (1977), represents one of the most robust findings in social psychology. When observing another person's actions, individuals tend to attribute behavior to stable personality traits rather than to temporary circumstances or environmental pressures.

Classic research by Jones and Harris (1967) demonstrated this bias experimentally. Participants read essays either supporting or opposing Fidel Castro, knowing the essayist's position had been randomly assigned. Despite this knowledge, participants still rated essay writers as having attitudes consistent with their essays' positions. Even when explicitly told that situational factors determined behavior, observers continued attributing actions to the actor's disposition (Gilbert & Malone, 1995).

The neural basis of attribution reflects the brain's theory of mind system. The temporoparietal junction and medial prefrontal cortex, regions implicated in mental state attribution, show differential activation when people make dispositional versus situational attributions (Moran, Jolly, & Mitchell, 2012). Importantly, this system appears to default to dispositional inference unless corrective situational information is deliberately processed—a cognitively demanding task that often fails under time pressure or cognitive load (Gilbert, Pelham, & Krull, 1988).

Cultural factors modulate but do not eliminate this bias. While Western individualistic cultures show stronger dispositional attribution than East Asian collectivist cultures, both patterns reflect assumptions about what causes behavior. The difference lies in which default model—individual agency or social context—takes precedence (Morris & Peng, 1994).

3. Stability & Continuity Assumptions

The human cognitive system strongly biases toward perceiving continuity and stability, even in the face of change. This manifests most clearly in status quo bias—the tendency to prefer current states of affairs over alternative options, independent of their objective merits. Samuelson and Zeckhauser (1988) found that decision-makers systematically favor existing conditions over changes, with this preference strengthening as the number of alternatives increases.

Neurobiological evidence links this bias to the brain's prediction error system. The basal ganglia and anterior cingulate cortex respond differentially to expected versus unexpected outcomes, with unexpected changes generating stronger neural responses than confirmations of existing patterns (Dunne & O'Doherty, 2013). This architecture creates an implicit penalty for deviation from established baselines, effectively assuming that current states represent "correct" or "natural" conditions.
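The prediction-error logic sketched above can be expressed as a minimal Rescorla-Wagner-style update. This is a generic textbook formulation, not a model taken from the cited neuroimaging studies, and the learning rate is an arbitrary illustrative value.

```python
def prediction_error(expected, observed):
    """Signed error: zero when the outcome confirms the expectation."""
    return observed - expected

def update_expectation(expected, observed, learning_rate=0.2):
    """Expectations drift toward observed outcomes in proportion to the error."""
    return expected + learning_rate * prediction_error(expected, observed)

# A confirmed baseline generates no error signal and so no update...
assert prediction_error(expected=1.0, observed=1.0) == 0.0
# ...while a deviation from the baseline generates a strong signal,
# making change, not continuity, the event that demands processing.
assert abs(prediction_error(expected=1.0, observed=0.0)) > 0
assert update_expectation(expected=0.0, observed=1.0) == 0.2
```

The asymmetry is structural: confirmations of the current state are computationally free, while deviations carry a cost, which is one way an architecture like this comes to treat the status quo as the "correct" baseline.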

The assumption of stability extends beyond preference to perception. When tracking objects or events over time, people automatically fill gaps with continuity assumptions. If an object briefly disappears behind an occluder, observers assume it continues existing in the same state rather than transforming or ceasing to exist (Scholl & Leslie, 1999). These assumptions serve useful purposes in stable environments but generate systematic errors when confronting genuine discontinuities or regime changes.

4. Context Transfer Assumptions

People automatically generalize knowledge from familiar contexts to novel situations, often failing to recognize when contextual differences invalidate such transfers. This challenge—termed the problem of domain generalization in machine learning—also afflicts human cognition. Austerweil, Sanborn, and Griffiths (2019) demonstrated that people learn not just specific facts but also abstract principles about how to generalize, with these meta-level assumptions strongly influenced by prior experience.

The difficulty lies in determining which aspects of knowledge should transfer across contexts. When people learn a concept in one domain—for example, that taxonomic similarity predicts shared properties in biology—they may inappropriately apply this principle to domains where different organizational structures prevail (Heit & Rubinstein, 1994). The brain must balance between overgeneralizing (applying learning too broadly) and undergeneralizing (failing to recognize relevant similarities across contexts).

Neurocognitive research on semantic cognition reveals how the anterior temporal lobe acts as a hub that integrates modality-specific information into generalizable concepts (Lambon Ralph, Jefferies, Patterson, & Rogers, 2017). This system enables abstraction but also creates vulnerability: once a generalization pattern is established, it tends to be applied automatically unless explicit contextual cues signal its inappropriateness. The assumption that knowledge structures transfer across contexts is computationally efficient but produces systematic errors in novel environments.

5. Social & Consensus Assumptions

The human drive for social cohesion creates powerful assumptions about group consensus and collective wisdom. Janis (1972) identified groupthink as a mode of thinking where group members prioritize harmony and consensus over critical evaluation of alternatives. In analyzing historical policy failures—including the Bay of Pigs invasion and escalation of the Vietnam War—Janis found that cohesive groups systematically suppressed dissent, stereotyped outsiders, and maintained illusions of unanimity.

The psychological mechanisms underlying groupthink reflect deep-seated social cognitive biases. Conformity research by Asch (1951) demonstrated that approximately 75% of participants conformed to obviously incorrect group judgments at least once, with conformity pressure intensifying in cohesive groups facing ambiguous situations. This conformity stems not merely from conscious compliance but from genuine perception shifts: when group members agree on a position, individuals often come to actually perceive it as more valid (Moscovici & Personnaz, 1980).

Brain imaging studies reveal that social conformity modulates activity in perceptual and decision-making regions. When individuals receive feedback that contradicts group consensus, increased activation appears in the rostral cingulate zone and ventral striatum—regions associated with error processing and social reward (Klucharev, Hytönen, Rijpkema, Smidts, & Fernández, 2009). This neural architecture creates an automatic assumption that group consensus reflects truth, with deviation from consensus experienced as aversive even when the group is demonstrably wrong.

The assumption that social consensus indicates correctness becomes particularly problematic when combined with selective information processing. Groups experiencing groupthink engage in biased search for information that confirms their preferred position while dismissing contradictory evidence (Hart, 1998). The circular logic—"we agree, therefore we must be right, therefore we should continue agreeing"—proves remarkably resistant to correction once established.

6. Authority & Legitimacy Assumptions

Perhaps no psychological research has more dramatically revealed the power of authority assumptions than Milgram's (1974) obedience experiments. When instructed by a scientist in a laboratory coat to administer increasingly severe electric shocks to another person, 65% of participants continued to the maximum 450-volt level despite the "victim's" screams of pain. Milgram's systematic variations demonstrated that obedience stemmed not from sadism or unusual personality traits, but from assumptions about legitimate authority.

Meta-analysis of Milgram's 21 experimental conditions revealed which factors most strongly predicted obedience. Contrary to popular interpretation, the prestige of the setting (Yale University versus a commercial office) had minimal effect. Instead, three factors proved crucial: the experimenter's directiveness (clear commands increased obedience), physical proximity (nearby authority figures increased compliance), and the presence of dissenting peers (seeing other participants refuse increased disobedience) (Burger, 2009; Haslam, Loughnan, & Perry, 2014).

These findings reveal an automatic assumption about authority: that legitimate authorities have the right to command and that obedience represents appropriate behavior. Participants in Milgram's studies entered what he termed an "agentic state"—a psychological condition in which individuals view themselves as instruments carrying out another person's wishes rather than as autonomous moral agents responsible for their own actions (Milgram, 1974). The assumption that authority confers legitimacy proved so powerful that it overrode participants' own moral objections to harming an innocent person.

Recent replications confirm the persistence of these patterns. Burger's (2009) partial replication found 70% of participants willing to continue beyond the 150-volt level (the point where the "learner" first protests), while a French television replication found 81% compliance—exceeding Milgram's original rates (Beauvois, Courbet, & Oberlé, 2012). Neuroscientific investigation using virtual reality versions of the paradigm reveals that obedient actions reduce empathic neural responses, suggesting that the agentic state actively suppresses moral processing systems (Cheetham, Pedroni, Antley, Slater, & Jäncke, 2009).

The assumption of authority legitimacy extends beyond experimental settings into everyday institutional life. People routinely accept that doctors, police officers, judges, and scientific experts possess special knowledge and decision-making rights in their domains. While often functionally appropriate, this assumption becomes dangerous when it prevents critical evaluation of authority directives, particularly when those directives conflict with ethical principles or empirical evidence.

Implications for Understanding Truth

These six categories of assumptions—about world states, human behavior, stability, context transfer, social consensus, and authority—form the invisible architecture of human thought. They enable rapid cognitive processing by providing default interpretations of ambiguous information, but they also create systematic blind spots. When reality deviates from these implicit models, people often distort perception to maintain consistency with their assumptions rather than updating the assumptions to match observed evidence.

Understanding these assumption patterns is crucial for evaluating truth claims. Assertions that align with default assumptions receive less critical scrutiny than those that contradict them, independent of their actual validity. Recognition of one's own assumption patterns—and the willingness to examine them explicitly when evaluating new information—represents a fundamental skill in truth-seeking. The human mind's assumption-driven architecture cannot be eliminated, but it can be recognized and deliberately counteracted through sustained intellectual effort.

Supporting Case Studies

Real-world examples of these assumption patterns in operation are documented in the accompanying case studies.


References

Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177-190). Carnegie Press.
Austerweil, J. L., Sanborn, S., & Griffiths, T. L. (2019). Learning how to generalize. Cognitive Science, 43(8), e12777. https://doi.org/10.1111/cogs.12777
Beauvois, J.-L., Courbet, D., & Oberlé, D. (2012). The prescriptive power of the television host: A transposition of Milgram's obedience paradigm to the context of TV game show. European Review of Applied Psychology, 62(3), 111-119. https://doi.org/10.1016/j.erap.2012.02.001
Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1-11. https://doi.org/10.1037/a0010932
Camerer, C., Loewenstein, G., & Prelec, D. (2005). Neuroeconomics: How neuroscience can inform economics. Journal of Economic Literature, 43(1), 9-64. https://doi.org/10.1257/0022051053737843
Cheetham, M., Pedroni, A. F., Antley, A., Slater, M., & Jäncke, L. (2009). Virtual Milgram: Empathic concern or personal distress? Evidence from functional MRI and dispositional measures. Frontiers in Human Neuroscience, 3, 29. https://doi.org/10.3389/neuro.09.029.2009
Dunne, S., & O'Doherty, J. P. (2013). Insights from the application of computational neuroimaging to social neuroscience. Current Opinion in Neurobiology, 23(3), 387-392. https://doi.org/10.1016/j.conb.2013.02.007
Gilbert, D. T., & Malone, P. S. (1995). The correspondence bias. Psychological Bulletin, 117(1), 21-38. https://doi.org/10.1037/0033-2909.117.1.21
Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54(5), 733-740. https://doi.org/10.1037/0022-3514.54.5.733
Hart, P. T. (1998). Preventing groupthink revisited: Evaluating and reforming groups in government. Organizational Behavior and Human Decision Processes, 73(2-3), 306-326. https://doi.org/10.1006/obhd.1998.2762
Haslam, S. A., Loughnan, S., & Perry, G. (2014). Meta-Milgram: An empirical synthesis of the obedience experiments. PLoS ONE, 9(4), e93927. https://doi.org/10.1371/journal.pone.0093927
Heit, E., & Rubinstein, J. (1994). Similarity and property effects in inductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(2), 411-422. https://doi.org/10.1037/0278-7393.20.2.411
Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin.
Jones, E. E., & Harris, V. A. (1967). The attribution of attitudes. Journal of Experimental Social Psychology, 3(1), 1-24. https://doi.org/10.1016/0022-1031(67)90034-0
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291. https://doi.org/10.2307/1914185
Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A., & Fernández, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61(1), 140-151. https://doi.org/10.1016/j.neuron.2008.11.027
Milgram, S. (1974). Obedience to authority: An experimental view. Harper & Row.
Moran, J. M., Jolly, E., & Mitchell, J. P. (2012). Social-cognitive deficits in normal aging. Journal of Neuroscience, 32(16), 5553-5561. https://doi.org/10.1523/JNEUROSCI.5511-11.2012
Morris, M. W., & Peng, K. (1994). Culture and cause: American and Chinese attributions for social and physical events. Journal of Personality and Social Psychology, 67(6), 949-971. https://doi.org/10.1037/0022-3514.67.6.949
Moscovici, S., & Personnaz, B. (1980). Studies in social influence: V. Minority influence and conversion behavior in a perceptual task. Journal of Experimental Social Psychology, 16(3), 270-282. https://doi.org/10.1016/0022-1031(80)90070-0
Lambon Ralph, M. A., Jefferies, E., Patterson, K., & Rogers, T. T. (2017). The neural and computational bases of semantic cognition. Nature Reviews Neuroscience, 18(1), 42-55. https://doi.org/10.1038/nrn.2016.150
Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173-220). Academic Press. https://doi.org/10.1016/S0065-2601(08)60357-3
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7-59. https://doi.org/10.1007/BF00055564
Scholl, B. J., & Leslie, A. M. (1999). Modularity, development and 'theory of mind'. Mind & Language, 14(1), 131-153. https://doi.org/10.1111/1468-0017.00106
Thaler, R. H. (2015). Misbehaving: The making of behavioral economics. W. W. Norton & Company.