Truth Index Encyclopedia

Adaptation, Learning & Decay

How systems adapt under pressure, why learning occurs unevenly or fails, and how decay emerges from feedback loops and environmental change


Visual Demonstration

[Figure: Feedback → Adjustment → Drift → Mismatch. Environmental conditions shift over time while system adaptation lags behind; the gap grows across successive feedback episodes.
Early phase: delayed response; weak feedback signal; past conditions still yield acceptable results.
Middle phase: noisy feedback; partial adjustments; environment continues to shift faster.
Late phase: structural mismatch; previous learning now counterproductive; decay accelerates.]

Adaptation proceeds through delayed, noisy feedback while environmental conditions shift continuously. Systems adjust incrementally in response to lagged signals, creating persistent mismatch between system state and current environment. Early adaptations that improved fit under previous conditions become sources of rigidity as environments change. The gap between environmental demands and system capability widens not from failure to adapt but from adaptation outpaced by environmental drift, with feedback loops providing distorted signals that reinforce past-optimized behaviors now misaligned with present conditions.
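The lag-and-drift dynamic can be sketched as a toy simulation (the parameter values and update rule are illustrative assumptions, not taken from the sources cited here): a system adjusts incrementally toward a delayed observation of a drifting environment, and the mismatch never closes — it settles at a persistent steady-state gap.

```python
# Toy model (assumed parameters): environment drifts linearly; the system
# corrects a fraction of the gap it perceives, but perceives it with a lag.
DRIFT = 1.0   # environment shifts by 1 unit per step
DELAY = 3     # feedback reports the environment as it was 3 steps ago
RATE = 0.5    # fraction of the observed gap corrected each step
STEPS = 200

env_history = []
system = 0.0
for t in range(STEPS):
    env = DRIFT * t
    env_history.append(env)
    observed = env_history[max(0, t - DELAY)]  # lagged, not current, signal
    system += RATE * (observed - system)       # incremental adjustment
    gap = env - system

# The gap settles near DRIFT * (DELAY + 1/RATE - 1): the system tracks the
# drift, but always from behind, and faster drift or longer delay widens it.
print(round(gap, 2))  # prints 4.0
```

The steady-state gap follows from the fixed point of the update rule; the qualitative point is that adaptation here is continuous and successful in a local sense, yet the mismatch is structural.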

Adaptation is response to pressure, not progress toward improvement. Learning occurs unevenly, often misattributed, and frequently degrades performance when conditions shift. Decay emerges not from neglect but from feedback loops that reinforce past success under changed conditions, from knowledge optimized for environments that no longer exist, and from habits that persist beyond their usefulness. Systems adapt, individuals learn, ventures evolve—but these processes operate through mechanisms that produce lag, distortion, and mismatch as reliably as they produce fit (Levinthal & March, 1993; March, 1991). The relationship between experience and capability proves far less direct than assumed.

Adaptation as Response to Pressure

Adaptation represents adjustment under selective pressure rather than improvement toward an optimum. Systems change when current configurations produce unacceptable outcomes, not when better alternatives become available (Hannan & Freeman, 1984). The trigger is negative feedback—declining performance, resource scarcity, competitive pressure—that makes continuation of existing patterns untenable. In the absence of such pressure, systems persist in current states regardless of whether superior alternatives exist (Cyert & March, 1963).

This pressure-driven dynamic creates path dependence. Adaptations address immediate problems using available resources and capabilities rather than optimal solutions. An organization facing revenue decline adjusts pricing, sales tactics, or cost structure based on what it can modify quickly with existing knowledge and resources, not what would maximize long-term fitness (Nelson & Winter, 1982). These pressure-driven adaptations often solve the immediate problem while creating constraints for future adaptation. The solution becomes embedded in routines, capabilities, and organizational memory, shaping what adaptations remain accessible in subsequent pressure episodes (Levitt & March, 1988).

Adaptation also occurs unevenly across system components. Some elements change readily under pressure while others resist modification due to interdependencies, sunk costs, or coordination requirements (Siggelkow, 2002). A venture might adapt its customer acquisition strategy quickly while its product architecture, organizational structure, or business model remain fixed. This uneven adaptation creates internal misalignment: parts of the system optimized for different conditions coexist, generating friction and performance degradation independent of external environmental change (Henderson & Clark, 1990).

Learning: Uneven, Delayed, and Misattributed

Learning represents updating of knowledge, beliefs, or capabilities based on experience. This updating proceeds through mechanisms that generate systematic distortions. Feedback arrives delayed: actions taken today produce observable outcomes weeks, months, or years later, long after causal connections become obscured by intervening events (Sterman, 1989). Attribution becomes problematic: success or failure results from combinations of actions, timing, external conditions, and chance, yet observers attribute outcomes to single salient factors (Ross, 1977). Learning occurs, but what is learned often bears limited relationship to underlying causal structure.

The unevenness of learning creates capability gaps. Individuals and organizations learn readily in domains with clear, rapid feedback and stable causal relationships. Performance improves through repetition when actions reliably produce interpretable results (Ericsson et al., 1993). In domains with delayed, noisy, or absent feedback, learning proceeds slowly or not at all despite extensive experience. A sales professional receives immediate feedback on pitch effectiveness and adapts quickly; a strategic planner receives delayed, ambiguous feedback on decisions whose outcomes depend on uncontrollable external factors and learns little despite years of experience (Einhorn & Hogarth, 1978).

Misattribution compounds these difficulties. Observers attribute success to recent salient actions rather than to earlier foundational work, luck, or favorable conditions. This misattribution shapes subsequent behavior: actors repeat actions that coincided with success even when those actions contributed minimally to outcomes (Denrell, 2003). Organizations institutionalize practices associated with past success, enshrining behaviors that may have been incidental or even counterproductive (Miller, 1993). The result is learning that reinforces causally irrelevant patterns while neglecting actual drivers of performance.

Feedback Loops: Clear, Noisy, Delayed, Absent

Feedback quality determines what can be learned from experience. Clear feedback provides unambiguous signals about action-outcome relationships: the action produced this result, adjusting the action would produce that result. Such feedback enables rapid learning when causal relationships remain stable (Kluger & DeNisi, 1996). Most entrepreneurial contexts provide feedback far noisier than this ideal. Outcomes result from multiple interacting factors; the same action produces different outcomes under different conditions; random variation overwhelms signal.

Noisy feedback creates opportunity for spurious learning. Observers detect patterns in random variation, attribute meaning to coincidence, and update beliefs based on illusory correlations (Lopes, 1982). An entrepreneur who succeeded despite strategic errors learns that those errors were valuable. A failed venture that employed sound practices learns to avoid those practices. The feedback was real—outcomes occurred—but the inference drawn from feedback bears little relationship to underlying causality. This spurious learning persists because subsequent experience is interpreted through the lens of initial (mis)learning, creating confirmation bias that reinforces incorrect beliefs (Nickerson, 1998).
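Spurious learning of this kind can be illustrated with a minimal reinforcement sketch (an assumed toy model, echoing Skinner's superstition experiment rather than implementing any cited study): outcomes are pure chance, independent of the chosen action, yet crediting whatever action coincided with success still produces strong "learned" preferences.

```python
# Toy model (assumption): reward is a coin flip unrelated to the action,
# but the learner reinforces whichever action happened to precede success.
import random

random.seed(7)
weights = {"a": 1.0, "b": 1.0, "c": 1.0}

for _ in range(500):
    total = sum(weights.values())
    r = random.uniform(0, total)        # sample an action by current weight
    acc = 0.0
    for action, w in weights.items():
        acc += w
        if r <= acc:
            break
    success = random.random() < 0.5     # outcome is independent of action
    if success:
        weights[action] *= 1.1          # credit whatever coincided with it

shares = {a: w / sum(weights.values()) for a, w in weights.items()}
# Typically one action ends up heavily preferred even though none caused
# success: the preference is an artifact of chance plus reinforcement.
```

Because reinforced actions are sampled more often, early coincidences compound — the same rich-get-richer dynamic by which organizations institutionalize incidental practices.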

Delayed feedback prevents learning entirely in many critical domains. Long cycle times between action and outcome mean that attribution becomes impossible: too many factors intervened, conditions changed, memory degraded. An investor makes portfolio allocation decisions whose outcomes become clear years later, after market conditions shifted, economic cycles turned, and dozens of other decisions were made. Learning what allocation strategy works proves infeasible because feedback arrives too late and too confounded to support causal inference (March, 2010). Experience accumulates without generating insight.
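Why delay defeats attribution can be shown with another assumed toy model: an outcome genuinely caused by an action, but observed only after a number of independent intervening events. As the delay grows, the measurable action-outcome correlation fades toward noise even though the causal link is real.

```python
# Toy model (assumption): outcome = action + one unit of noise per step of
# delay. Longer delay buries the action's contribution in intervening events.
import random

random.seed(3)

def corr(xs, ys):
    """Pearson correlation, computed directly from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def observed_correlation(delay, trials=2000):
    actions, outcomes = [], []
    for _ in range(trials):
        a = random.gauss(0, 1)
        noise = sum(random.gauss(0, 1) for _ in range(delay))  # intervening events
        actions.append(a)
        outcomes.append(a + noise)   # outcome truly caused by the action
    return corr(actions, outcomes)

fast = observed_correlation(delay=0)   # immediate feedback: clean signal
slow = observed_correlation(delay=25)  # long delay: signal buried in noise
```

Under this model the correlation decays roughly as 1/sqrt(1 + delay): the causal structure is unchanged, but the inference problem becomes statistically hopeless, which is the sense in which experience accumulates without generating insight.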

Absent feedback creates a different dynamic. Actions produce no observable signal at all, either because outcomes are unobservable or because actions prevented outcomes that would have occurred in their absence. Preventive measures, risk mitigation strategies, and insurance generate no positive feedback when successful—the disaster that didn't happen provides no signal (Meyer & Kunreuther, 2017). Learning from absence proves cognitively difficult: observers cannot update beliefs based on events that didn't occur. This asymmetry favors action over inaction even when inaction is optimal, because action occasionally produces observable positive outcomes while successful inaction is invisible.

Overfitting to Past Conditions

Learning optimizes performance for experienced conditions. This optimization creates vulnerability when conditions change. Skills, knowledge, and routines developed through extensive experience in one environment become liabilities in different environments (Leonard-Barton, 1992). The more thoroughly a system adapts to specific conditions, the more narrowly optimized and therefore fragile it becomes when those conditions shift.

Overfitting manifests as specialization. Organizations develop capabilities precisely tuned to current market conditions, customer preferences, or competitive landscapes (March, 1991). These capabilities confer advantage as long as conditions remain stable but become sources of rigidity when conditions change. A manufacturing firm optimized for high-volume standardized production struggles to adapt when markets shift toward customization. A sales organization specialized in relationship-based enterprise sales cannot easily pivot to product-led growth models. The optimization that drove past success becomes the constraint preventing future adaptation (Christensen, 1997).
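The specialization trade-off can be made concrete with a small assumed example: a "specialist" that memorizes exactly the conditions it has seen versus a "generalist" that abstracts a coarse rule. The specialist is perfect on past conditions and collapses when conditions drift even slightly; the abstracted rule survives the drift.

```python
# Toy model (assumption): exact memorization vs a coarse abstracted rule,
# evaluated after a small drift in the inputs the environment presents.
def true_label(x):
    return 1 if x > 0.5 else 0

old_inputs = [i / 20 for i in range(21)]          # conditions seen in training
lookup = {x: true_label(x) for x in old_inputs}   # specialist: memorized fit

def specialist(x):
    return lookup.get(x, 0)                       # unseen condition -> guess 0

def generalist(x):
    return 1 if x > 0.5 else 0                    # coarse abstracted rule

new_inputs = [i / 20 + 0.013 for i in range(21)]  # environment drifts slightly

def accuracy(model, xs):
    return sum(model(x) == true_label(x) for x in xs) / len(xs)

old_spec = accuracy(specialist, old_inputs)  # 1.0: perfect on past conditions
new_spec = accuracy(specialist, new_inputs)  # collapses off known conditions
new_gen = accuracy(generalist, new_inputs)   # coarse rule still works
```

The memorizing learner is the better performer as long as conditions repeat exactly — which is precisely why optimization for stability is rewarded right up until stability ends.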

Individual expertise exhibits similar dynamics. Experts develop highly specialized knowledge structures optimized for familiar problem types (Chi et al., 1981). This specialization enables rapid, accurate performance within the domain of expertise but creates brittleness outside it. Expert performance degrades sharply when problems deviate from familiar patterns, often underperforming novices who approach novel situations without preconceptions (Wiley, 1998). The learning that created expertise simultaneously created inflexibility.

Path Dependence and Habit Persistence

Early adaptations constrain later ones. Decisions made under initial conditions establish trajectories that prove difficult to escape even when conditions change (Arthur, 1989). A venture's first product shapes its technological infrastructure, customer base, and organizational capabilities, constraining what products can be developed subsequently. An individual's early career choices determine skill development, professional networks, and reputation, limiting what career paths remain accessible later. These path dependencies result not from optimal planning but from historical accident shaped by initial conditions (David, 1985).

Habits persist beyond their usefulness through automation and cognitive efficiency. Behaviors that once required conscious effort become automatic through repetition, reducing cognitive load and enabling multitasking (Wood & Neal, 2007). This automation creates efficiency when environments remain stable but becomes a source of error when environments change. The automated behavior continues to execute while conscious attention focuses elsewhere, producing actions misaligned with current conditions. Overriding automated responses requires cognitive effort that often exceeds available attention, particularly under time pressure or cognitive load (Verplanken & Wood, 2006).

Organizational routines exhibit even stronger persistence than individual habits. Routines embed knowledge across multiple individuals, systems, and processes (Feldman & Pentland, 2003). Changing routines requires coordinating changes across these distributed elements, creating coordination costs that often exceed the benefits of adaptation. Organizations continue executing routines optimized for past conditions because the cost and risk of changing them exceeds the cost of suboptimal performance under current conditions (Hannan et al., 2006). Routines designed for efficiency become sources of inertia.

Knowledge Accumulation Versus Skill Decay

Knowledge accumulates but skills decay through disuse. Factual knowledge—information about the world, relationships between concepts, procedural steps—remains accessible long after acquisition, degrading slowly if at all (Bahrick, 1984). Procedural skills—the ability to execute complex actions fluently—decay rapidly without practice. A surgeon who stops performing procedures loses manual dexterity within months; a programmer who stops coding loses fluency with syntax and patterns; a language speaker who stops practicing loses conversational ability (Arthur et al., 1998).
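The asymmetry can be modeled as exponential retention with different half-lives (the half-life values below are illustrative assumptions, not measurements from the cited skill-decay literature): after the same period of disuse, declarative knowledge remains largely intact while procedural fluency has fallen sharply.

```python
# Toy model (assumed half-lives): exponential retention after disuse.
def retention(months, half_life_months):
    """Fraction of trained capability remaining after `months` without practice."""
    return 0.5 ** (months / half_life_months)

knowledge = retention(12, half_life_months=60)    # slow decay: facts, concepts
motor_skill = retention(12, half_life_months=6)   # fast decay: fluent execution

# After a year without practice, knowledge retention stays near 87% under
# these assumptions, while procedural fluency has fallen to 25%.
```

The specific curve shape matters less than the gap it produces: the same year of disuse leaves knowledge largely in place and execution capability substantially gone, which is the knowledge-capability gap discussed below.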

This asymmetry creates a knowledge-capability gap. Individuals retain knowledge about what should be done long after they lose the capability to execute it. An entrepreneur may know a great deal about market strategies, operational best practices, and growth tactics while lacking the current skills to implement them effectively. The knowledge remains accessible but its utility depends on execution capabilities that have decayed. This gap often goes unrecognized: knowledge availability creates confidence in capability that actual performance cannot support (Kruger & Dunning, 1999).

Skill decay also proceeds unevenly across skill types. Perceptual and motor skills decay more rapidly than cognitive skills; complex integrated skills decay faster than simple discrete skills (Arthur et al., 1998). A sales professional who stops selling loses rapport-building instincts and negotiation timing before losing product knowledge. An analyst who stops modeling loses fluency with tools and techniques before losing conceptual understanding of analysis frameworks. The uneven decay creates capability profiles misaligned with task requirements: the skills that decay fastest are often the skills most critical for performance.

Organizational Memory and Forgetting

Organizations accumulate memory in documents, procedures, systems, and personnel. This memory provides continuity and enables learning to persist beyond individual tenure (Walsh & Ungson, 1991). However, organizational memory also generates rigidity. Established procedures encode past solutions to past problems, constraining current problem-solving to previously successful approaches. Knowledge embedded in systems shapes what information is attended to, what alternatives are considered, and what actions are feasible (Argote & Miron-Spektor, 2011).

Organizational forgetting occurs through personnel turnover, system changes, and procedural evolution. When experienced personnel leave, tacit knowledge—the informal know-how and contextual understanding that never got documented—leaves with them (Nonaka & Takeuchi, 1995). System upgrades and technology changes eliminate knowledge embedded in legacy systems. Procedures evolve through small incremental modifications until their original rationale becomes invisible, creating practices whose purpose no one understands but everyone continues to execute (Feldman, 2000).

The balance between organizational memory and forgetting shapes adaptive capacity. Too much memory creates rigidity: the organization cannot deviate from established patterns even when current conditions demand different approaches. Too much forgetting creates instability: the organization repeats past mistakes because institutional memory of why certain approaches fail has been lost (Argote, 1999). Most organizations err toward excessive memory, particularly of successful experiences, creating resistance to adaptation even as environments change (Levinthal & March, 1993).

Environmental Change Outpacing Adaptation

Environments change continuously at varying rates across different dimensions. Technology, competition, regulation, customer preferences, and macroeconomic conditions all shift according to their own dynamics (Tushman & Anderson, 1986). Systems adapt to these changes, but adaptation proceeds at rates determined by internal constraints—decision cycles, resource availability, coordination requirements, political dynamics. When environmental change outpaces adaptive capacity, mismatch accumulates (Sastry, 1997).

This mismatch creates a moving target problem. Systems adapt toward current conditions, but by the time adaptation completes, conditions have shifted. A product optimized for current customer preferences launches into a market with different preferences. A capability developed for current competitive dynamics becomes relevant only after the competitive landscape has changed. Adaptation occurs but fails to close the gap between system state and environmental demands (Baum & Singh, 1994).

The rate of environmental change also varies unpredictably. Periods of relative stability alternate with periods of rapid change. Systems optimized for stable environments—those that invest heavily in specialized capabilities and efficient routines—suffer disproportionately during periods of rapid change. Their optimization for stability becomes a source of fragility when stability ends (Romanelli & Tushman, 1994). Conversely, systems maintaining flexibility to handle rapid change pay efficiency costs during stable periods. There exists no adaptation strategy that performs well across all environmental regimes.

When Learning Degrades Performance

Learning can reduce performance when it optimizes for misleading feedback, when it narrows attention to previously successful approaches, or when it creates overconfidence in unreliable knowledge. Superstitious learning—updating beliefs based on spurious correlations—leads actors to repeat behaviors that coincidentally preceded success while abandoning behaviors that were actually effective (Skinner, 1948). An entrepreneur who succeeded despite poor strategy learns to repeat that poor strategy, degrading future performance.

Success-driven learning creates particular vulnerabilities. Success focuses attention on actions that preceded successful outcomes, reducing exploration of alternatives (Levinthal & March, 1993). Organizations learn to repeat successful strategies, refining and optimizing them through repeated application. This refinement increases efficiency within the successful approach while simultaneously reducing capability with alternative approaches. When conditions change such that the previously successful strategy becomes ineffective, the organization possesses highly developed capabilities for an obsolete strategy and underdeveloped capabilities for strategies better suited to current conditions (Audia et al., 2000).

Confidence from experience also degrades performance when that experience provides misleading lessons. Extensive experience in stable environments creates confidence in pattern recognition and intuitive judgment. This confidence persists when individuals enter novel or unstable environments where their experience provides limited guidance (Kahneman & Klein, 2009). The individual applies learned patterns confidently to situations where those patterns don't apply, producing worse outcomes than less experienced individuals who recognize uncertainty and proceed cautiously. Learning created both capability and overconfidence; the capability has limited domain of applicability while the overconfidence extends broadly.

When Decay Occurs Despite Apparent Success

Decay can proceed invisibly during periods of success when that success derives from favorable conditions rather than sustainable capabilities. A venture succeeds because market timing, competitive dynamics, or resource availability temporarily favor its approach. Performance metrics remain strong, providing no signal of underlying capability erosion. Skills atrophy through disuse as current conditions don't require their application. Organizational capabilities narrow as the organization optimizes for current success patterns (Miller, 1993).

When conditions shift, the decay becomes visible. Capabilities that weren't exercised during the success period no longer function effectively. The organization cannot adapt because adaptive capacity decayed while performance remained strong. This pattern appears frequently in market downturns: organizations that thrived during growth periods discover that capabilities required for survival during contractions—cost discipline, operational efficiency, selective resource allocation—have atrophied from disuse (Sutton & Callahan, 1987).

Personnel decay exhibits similar dynamics during success. Individual skills not required for current role execution decay invisibly. A manager promoted into senior leadership loses hands-on operational capabilities. A technical specialist moved into management loses programming skills. During success periods these capability losses produce no negative consequences. When conditions change—the manager needs to execute operationally, the specialist needs to contribute technically—the decay becomes apparent and problematic (Baumard & Starbuck, 2005). Success masks decay that manifests only when conditions demand capabilities no longer present.


Adaptation, learning, and decay operate through feedback mechanisms that generate lag, distortion, and misalignment as reliably as they generate fit between systems and environments. Learning optimizes for experienced conditions while conditions continuously shift. Adaptation addresses immediate pressures while creating path dependencies that constrain future responses. Knowledge accumulates while skills decay. Success masks capability erosion. The persistent gap between adaptive response and environmental demands emerges not from failure to learn but from structural properties of feedback loops, attribution mechanisms, and temporal dynamics that ensure adaptation remains perpetually incomplete.

Supporting Case Studies

CS-001: The Endless Scroll Funnel — Illustrates how learned scrolling behavior persisted despite changing content quality, demonstrating habit persistence beyond utility and delayed recognition of environmental shift.

CS-002: The Assessment Questionnaire — Documents how initial framing created path dependence in subsequent interpretation, with early adaptation (accepting the assessment premise) constraining later responses regardless of accumulating contrary evidence.

CS-007: The Timed Purchase Pop-Up — Shows how time pressure prevented adaptive learning, with a feedback loop (the countdown timer) creating urgency that blocked the processing of information necessary for informed decision adjustment.


References

Argote, L. (1999). Organizational learning: Creating, retaining and transferring knowledge. Kluwer Academic Publishers.

Argote, L., & Miron-Spektor, E. (2011). Organizational learning: From experience to knowledge. Organization Science, 22(5), 1123-1137. https://doi.org/10.1287/orsc.1100.0621

Arthur, W., Jr., Bennett, W., Jr., Stanush, P. L., & McNelly, T. L. (1998). Factors that influence skill decay and retention: A quantitative review and analysis. Human Performance, 11(1), 57-101. https://doi.org/10.1207/s15327043hup1101_3

Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical events. Economic Journal, 99(394), 116-131. https://doi.org/10.2307/2234208

Audia, P. G., Locke, E. A., & Smith, K. G. (2000). The paradox of success: An archival and a laboratory study of strategic persistence following radical environmental change. Academy of Management Journal, 43(5), 837-853. https://doi.org/10.2307/1556413

Bahrick, H. P. (1984). Semantic memory content in permastore: Fifty years of memory for Spanish learned in school. Journal of Experimental Psychology: General, 113(1), 1-29. https://doi.org/10.1037/0096-3445.113.1.1

Baum, J. A. C., & Singh, J. V. (1994). Organizational niches and the dynamics of organizational founding. Organization Science, 5(4), 483-501. https://doi.org/10.1287/orsc.5.4.483

Baumard, P., & Starbuck, W. H. (2005). Learning from failures: Why it may not happen. Long Range Planning, 38(3), 281-298. https://doi.org/10.1016/j.lrp.2005.03.004

Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121-152. https://doi.org/10.1207/s15516709cog0502_2

Christensen, C. M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Harvard Business School Press.

Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Prentice-Hall.

David, P. A. (1985). Clio and the economics of QWERTY. American Economic Review, 75(2), 332-337.

Denrell, J. (2003). Vicarious learning, undersampling of failure, and the myths of management. Organization Science, 14(3), 227-243. https://doi.org/10.1287/orsc.14.3.227.15162

Einhorn, H. J., & Hogarth, R. M. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85(5), 395-416. https://doi.org/10.1037/0033-295X.85.5.395

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406. https://doi.org/10.1037/0033-295X.100.3.363

Feldman, M. S. (2000). Organizational routines as a source of continuous change. Organization Science, 11(6), 611-629. https://doi.org/10.1287/orsc.11.6.611.12529

Feldman, M. S., & Pentland, B. T. (2003). Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48(1), 94-118. https://doi.org/10.2307/3556620

Hannan, M. T., & Freeman, J. (1984). Structural inertia and organizational change. American Sociological Review, 49(2), 149-164. https://doi.org/10.2307/2095567

Hannan, M. T., Pólos, L., & Carroll, G. R. (2006). The fog of change: Opacity and asperity in organizations. Administrative Science Quarterly, 51(3), 399-432. https://doi.org/10.2189/asqu.51.3.399

Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 35(1), 9-30. https://doi.org/10.2307/2393549

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515-526. https://doi.org/10.1037/a0016755

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. https://doi.org/10.1037/0033-2909.119.2.254

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121

Leonard-Barton, D. (1992). Core capabilities and core rigidities: A paradox in managing new product development. Strategic Management Journal, 13(S1), 111-125. https://doi.org/10.1002/smj.4250131009

Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal, 14(S2), 95-112. https://doi.org/10.1002/smj.4250141009

Levitt, B., & March, J. G. (1988). Organizational learning. Annual Review of Sociology, 14, 319-340. https://doi.org/10.1146/annurev.so.14.080188.001535

Lopes, L. L. (1982). Doing the impossible: A note on induction and the experience of randomness. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8(6), 626-636. https://doi.org/10.1037/0278-7393.8.6.626

March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71-87. https://doi.org/10.1287/orsc.2.1.71

March, J. G. (2010). The ambiguities of experience. Cornell University Press.

Meyer, R., & Kunreuther, H. (2017). The ostrich paradox: Why we underprepare for disasters. Wharton School Press.

Miller, D. (1993). The architecture of simplicity. Academy of Management Review, 18(1), 116-138. https://doi.org/10.2307/258824

Nelson, R. R., & Winter, S. G. (1982). An evolutionary theory of economic change. Harvard University Press.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220. https://doi.org/10.1037/1089-2680.2.2.175

Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford University Press.

Romanelli, E., & Tushman, M. L. (1994). Organizational transformation as punctuated equilibrium: An empirical test. Academy of Management Journal, 37(5), 1141-1166. https://doi.org/10.2307/256669

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in Experimental Social Psychology, 10, 173-220. https://doi.org/10.1016/S0065-2601(08)60357-3

Sastry, M. A. (1997). Problems and paradoxes in a model of punctuated organizational change. Administrative Science Quarterly, 42(2), 237-275. https://doi.org/10.2307/2393920

Siggelkow, N. (2002). Evolution toward fit. Administrative Science Quarterly, 47(1), 125-159. https://doi.org/10.2307/3094893

Skinner, B. F. (1948). 'Superstition' in the pigeon. Journal of Experimental Psychology, 38(2), 168-172. https://doi.org/10.1037/h0055873

Sterman, J. D. (1989). Modeling managerial behavior: Misperceptions of feedback in a dynamic decision making experiment. Management Science, 35(3), 321-339. https://doi.org/10.1287/mnsc.35.3.321

Sutton, R. I., & Callahan, A. L. (1987). The stigma of bankruptcy: Spoiled organizational image and its management. Academy of Management Journal, 30(3), 405-436. https://doi.org/10.2307/256007

Tushman, M. L., & Anderson, P. (1986). Technological discontinuities and organizational environments. Administrative Science Quarterly, 31(3), 439-465. https://doi.org/10.2307/2392832

Verplanken, B., & Wood, W. (2006). Interventions to break and create consumer habits. Journal of Public Policy & Marketing, 25(1), 90-103. https://doi.org/10.1509/jppm.25.1.90

Walsh, J. P., & Ungson, G. R. (1991). Organizational memory. Academy of Management Review, 16(1), 57-91. https://doi.org/10.2307/258607

Wiley, J. (1998). Expertise as mental set: The effects of domain knowledge in creative problem solving. Memory & Cognition, 26(4), 716-730. https://doi.org/10.3758/BF03211392

Wood, W., & Neal, D. T. (2007). A new look at habits and the habit-goal interface. Psychological Review, 114(4), 843-863. https://doi.org/10.1037/0033-295X.114.4.843