
Feedback Loops, Drift, and Unintended Consequences

Section 6: Technology & Tools — Chapter 4
Figure: Feedback Dynamics — Reinforcement, Drift, and Divergence. The diagram contrasts a positive feedback loop (amplification: output reinforces input, escalating from initial state to amplified state) with a negative feedback loop (stabilisation: output counteracts deviation from a target state), and charts system drift over time (T₀ original → T₁ minor drift → T₂ accumulation → T₃ divergence → T₄ unintended emergent behaviour). Pattern: small initial deviations → feedback reinforcement → parameter adjustment → cumulative drift → system behaviour diverges from original design without intervention or redesign.
Feedback loops structure dynamic systems where outputs influence subsequent inputs, creating cycles that either amplify initial conditions or counteract deviations from target states. Positive feedback reinforces changes, escalating small perturbations into dominant behaviours through repeated amplification, while negative feedback stabilises systems by correcting deviations, maintaining equilibrium around desired states. Drift emerges when feedback-driven adjustments accumulate over time, shifting system parameters, decision thresholds, or operational patterns away from original configurations. Unintended consequences arise not from design flaws or misuse but from the temporal dynamics of systems responding to their own outputs, producing emergent behaviours that diverge from initial intentions as feedback cycles operate across operational iterations.

Feedback loops create circular causation where system outputs become inputs for subsequent operations, establishing iterative processes that modify behaviour across time (Meadows, 2008). The circularity distinguishes feedback from linear processes where outputs terminate rather than recirculate—feedback systems continuously adjust based on prior results, creating temporal coupling between past and future states (Sterman, 2000). A thermostat that monitors temperature and activates heating creates feedback by using current temperature as input for heating decisions, producing outputs that change the input condition for the next operational cycle (Åström & Murray, 2008). The loop structure enables self-regulation—systems respond to their own effects without external intervention—but also creates potential for self-reinforcement where feedback amplifies rather than moderates initial conditions (Ford, 2010). Feedback operates as a structural property rather than an intentional mechanism, emerging from any configuration in which outputs feed back as inputs, regardless of whether designers anticipate the recursive dynamics.
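
A minimal sketch (with illustrative gain values only) makes the circularity concrete: the same loop structure damps or amplifies depending solely on how strongly each output recirculates as the next input.

```python
# Minimal sketch of circular causation. The gains are arbitrary illustrative
# choices; the point is the structure, not the numbers.

def run_loop(gain, x0=1.0, steps=10):
    """Iterate x_{t+1} = gain * x_t: each output becomes the next input."""
    x, history = x0, [x0]
    for _ in range(steps):
        x = gain * x              # output recirculates as the next input
        history.append(round(x, 3))
    return history

print("amplifying (gain 1.5):", run_loop(1.5))
print("damping    (gain 0.5):", run_loop(0.5))
```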

Positive feedback amplifies deviations, creating exponential growth or decline as outputs reinforce the conditions that produced them (Scheffer et al., 2009). The reinforcement operates through self-strengthening cycles: increases produce conditions favouring further increases, while decreases create conditions accelerating further decreases, establishing runaway dynamics that diverge from initial states (Lenton et al., 2008). Network effects in communication platforms demonstrate positive feedback—each new user increases platform value for existing users, creating incentives for additional adoption that further increases value, producing exponential rather than linear growth (Zhu & Iansiti, 2012). The amplification continues until external constraints—resource limits, saturation effects, countervailing forces—impose boundaries that halt escalation (Biggs et al., 2012). Positive feedback creates instability: small initial differences become magnified through repeated reinforcement, producing outcomes dramatically sensitive to initial conditions where minor perturbations cascade into major divergences across operational time.
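
A toy adoption model, with an invented growth rate and population cap, shows the characteristic shape: self-reinforcing growth runs exponentially until the external constraint (the unadopted remainder) halts escalation.

```python
# Toy network-effect adoption curve: each user makes the platform more
# attractive (positive feedback) until saturation caps growth. The rate
# constant and population size are illustrative only.

def adoption_curve(rate=0.5, population=10_000, initial=10, steps=40):
    users = float(initial)
    history = [users]
    for _ in range(steps):
        # growth proportional to current users (self-reinforcement), scaled
        # by the unadopted share (the external constraint on escalation)
        users += rate * users * (1 - users / population)
        history.append(users)
    return history

curve = adoption_curve()
for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  users={curve[t]:8.0f}")
```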

Negative feedback counteracts deviations, creating stability by opposing changes that move systems away from target states (Åström & Murray, 2008). The correction operates through opposition: outputs that exceed targets trigger responses reducing output, while outputs below targets trigger responses increasing output, creating oscillation around equilibrium rather than continuous drift (Sterman, 2000). Temperature regulation through heating and cooling systems exemplifies negative feedback—when temperature rises above threshold, cooling activates; when temperature falls below threshold, heating activates, maintaining temperature within specified range through corrective responses (Skogestad & Postlethwaite, 2005). The correction creates resilience: external perturbations that disturb equilibrium trigger compensatory responses that restore original state, preventing drift despite ongoing disturbances (Anderies et al., 2013). However, negative feedback introduces delay—correction occurs after deviation detection, creating lag between disturbance and response that enables temporary excursions from target before stabilisation mechanisms engage.
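
A minimal bang-bang controller in the spirit of the heating/cooling example, with invented thresholds and disturbance sizes, shows correction engaging only when temperature leaves the target band:

```python
import random
random.seed(0)

def regulate(temp=17.0, target=20.0, band=1.0, steps=15):
    """Bang-bang control: act only when temperature leaves the target band."""
    for t in range(steps):
        if temp < target - band:
            temp += 0.8                      # heating opposes downward deviation
        elif temp > target + band:
            temp -= 0.8                      # cooling opposes upward deviation
        temp += random.uniform(-0.4, 0.4)    # ongoing ambient disturbance
        print(f"t={t:2d}  temp={temp:5.2f}")

regulate()
```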

Time delays between action and feedback create dynamic complexity where systems respond to outdated information, producing oscillation, overshoot, or instability (Sterman, 2000). When feedback arrives slowly relative to system change rate, responses address conditions that no longer exist, generating inappropriate adjustments that amplify rather than correct deviations (Scheffer et al., 2009). Supply chain systems that adjust production based on sales data experience delays—production decisions occur weeks before manufactured goods reach market, creating lags where production responds to past demand rather than current conditions (Lee et al., 1997). The delay creates bullwhip effects: small demand fluctuations at retail level amplify into large production swings because correction signals arrive too late to prevent overcorrection, producing oscillation rather than smooth adjustment (Chen et al., 2000). Delayed feedback transforms stabilising mechanisms into destabilising ones when temporal gaps exceed system responsiveness capacity.
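
The destabilising effect of delay can be seen in a small sketch where a controller corrects toward a target using a reading several steps old. The gain and delay values are illustrative; the same gain that settles promptly with a one-step delay oscillates and grows with a longer one.

```python
from collections import deque

def delayed_control(gain=0.6, delay=4, target=100.0, steps=30):
    """Correct toward a target using a measurement that is `delay` steps old."""
    level = 50.0
    stale = deque([level] * delay, maxlen=delay)  # pipeline of old readings
    trace = []
    for _ in range(steps):
        observed = stale[0]                   # the system acts on stale data
        level += gain * (target - observed)   # "correction" can overshoot
        stale.append(level)
        trace.append(level)
    return trace

print([f"{v:.0f}" for v in delayed_control()])         # long delay: oscillates, grows
print([f"{v:.0f}" for v in delayed_control(delay=1)])  # same gain, short delay: settles
```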

Parameter drift occurs when automated adjustment mechanisms progressively shift operational thresholds, weights, or decision criteria away from original calibrations (Barabas et al., 2018). Adaptive systems that modify parameters based on performance metrics create feedback where parameter adjustments influence performance, which influences subsequent parameter adjustments, establishing recursive optimisation that drifts through parameter space (Ensign et al., 2018). Machine learning systems that retrain on new data containing their own previous predictions gradually shift classification boundaries as prediction-influenced data becomes training input for future models (Perdomo et al., 2020). The drift operates imperceptibly—each adjustment appears reasonable given immediate context—but cumulative shifts produce substantial divergence from initial configuration without discrete transition points that would trigger review or intervention (Selbst et al., 2019). Systems optimised through feedback-driven parameter adjustment evolve continuously, making current configuration increasingly distant from original design as temporal accumulation reshapes operational logic.
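
A toy self-training loop illustrates the ratchet: the model is "recalibrated" each round to a percentile of the items it previously accepted, so every step looks like a reasonable adjustment while the threshold drifts monotonically. The distributions and percentile choice are invented.

```python
import random
random.seed(0)

# Each round, the acceptance threshold is reset to the 10th percentile of the
# items the *previous* threshold accepted. Because accepted items all exceed
# the old threshold, the new threshold can only ratchet upward, even though
# each individual recalibration appears locally reasonable.

def recalibrate(threshold=0.0, rounds=8, n=5_000):
    for r in range(rounds):
        scores = [random.gauss(0, 1) for _ in range(n)]
        accepted = sorted(s for s in scores if s > threshold)
        if not accepted:
            break
        threshold = accepted[len(accepted) // 10]   # ~10th percentile
        print(f"round {r}: threshold = {threshold:.3f}")

recalibrate()
```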

Data feedback loops emerge when system outputs shape the data used for system training or calibration, creating circularity where systems learn from their own effects (Ensign et al., 2018). Predictive policing systems that direct enforcement to algorithmically identified areas generate arrest data concentrated in those areas, creating training data skewed by deployment patterns rather than underlying crime distribution (Richardson et al., 2019). The skewed data reinforces prediction patterns—areas with higher arrest rates due to increased presence appear as higher crime areas in data—establishing feedback where predictions influence reality which influences predictions, creating self-fulfilling patterns (Lum & Isaac, 2016). The circularity makes validation impossible through output examination: system predictions prove accurate because deployment shaped outcomes, not because predictions reflected independent reality that validation assumes (O'Neil, 2016). Data feedback transforms descriptive systems into prescriptive ones, with predictions shaping reality rather than merely describing it.
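
A toy allocation model in the spirit of Ensign et al. (2018), with invented counts, shows how a one-incident noise difference between two identical regions hardens into total concentration once deployment follows recorded data and data follows deployment.

```python
import random
random.seed(1)

# Two regions with identical true incident rates. Each day the single patrol
# goes wherever recorded incidents are highest, and incidents are recorded
# only where the patrol is present. All counts are illustrative.

true_rate = [0.5, 0.5]      # ground truth: the regions are indistinguishable
recorded = [11, 10]         # one extra recorded incident, pure noise

for day in range(365):
    region = 0 if recorded[0] >= recorded[1] else 1   # data -> deployment
    if random.random() < true_rate[region]:
        recorded[region] += 1                          # deployment -> data

print("recorded incidents:", recorded)   # e.g. [~190, 10]: total concentration
```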

Optimisation drift occurs when systems continuously adjust to maximise specified metrics, producing configurations that excel at measured objectives while degrading unmeasured dimensions (Selbst et al., 2019). Automated systems that optimise for observable targets—click rates, engagement duration, conversion percentages—gradually shift behaviour toward metric maximisation regardless of broader impact on unmeasured outcomes (Harambam et al., 2019). Content recommendation algorithms optimised for engagement time drift toward increasingly extreme or sensational content because such material produces measurable engagement increases, even as unmeasured effects on information quality or polarisation worsen (Bakshy et al., 2015). The drift reflects a structural incentive: feedback rewards measured improvement, creating pressure toward parameter configurations that maximise metrics while costs accumulate in unmeasured domains that optimisation ignores (Swart, 2021). Systems become increasingly effective at producing specified outcomes while increasingly problematic along dimensions outside optimisation scope.
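
A sketch with invented response curves shows the mechanism: hill-climbing on the measured metric alone drives a "sensationalism" knob upward, because the unmeasured quality decline generates no corrective signal.

```python
import math

# Hypothetical response curves: engagement (measured) rises with
# sensationalism; quality (unmeasured) falls. Both functions are invented.

def engagement(x):            # measured: rises with sensationalism, saturating
    return 1 - math.exp(-2 * x)

def quality(x):               # unmeasured: declines as sensationalism rises
    return max(0.0, 1 - x)

x = 0.1
for step in range(8):
    if engagement(x + 0.1) > engagement(x):   # feedback sees engagement only
        x += 0.1
    print(f"step {step}: sensationalism={x:.1f}  "
          f"engagement={engagement(x):.2f}  quality={quality(x):.2f}")
```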

Threshold shifts occur when feedback adjusts the criteria for action, progressively changing what conditions trigger responses (Lenton et al., 2008). Adaptive systems that modify decision thresholds based on outcome frequencies establish feedback where threshold changes alter outcome distributions, which influence subsequent threshold adjustments (Biggs et al., 2012). Automated content moderation that adjusts removal thresholds based on volume creates feedback—lowering thresholds increases removals, reducing workload, potentially triggering further threshold lowering, establishing drift toward increasingly permissive or restrictive moderation without explicit policy changes (Gillespie, 2018). The gradual threshold adjustment escapes notice because each change appears marginal, but accumulated shifts substantially alter system behaviour as current thresholds diverge significantly from original calibrations (Scheffer et al., 2009). Threshold drift transforms system character—what triggers intervention, what counts as acceptable, what receives attention—through incremental feedback-driven recalibration rather than deliberate redesign.
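
A toy moderation loop, with an invented score distribution and workload target, shows the threshold walking steadily even though no policy decision is ever taken:

```python
import random
random.seed(2)

# The removal threshold is nudged each week to keep flag volume near a
# workload target. No policy changes, yet the threshold drifts week on week.

threshold, target_volume = 0.90, 120
for week in range(10):
    scores = [random.random() for _ in range(10_000)]
    flagged = sum(s > threshold for s in scores)
    # workload feedback: too many flags -> raise threshold; too few -> lower it
    threshold += 0.005 if flagged > target_volume else -0.005
    print(f"week {week}: flagged={flagged:4d}  threshold={threshold:.3f}")
```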

Reinforcement amplification occurs when feedback strengthens existing patterns, making dominant behaviours increasingly dominant while suppressing alternatives (Barabas et al., 2018). Systems that allocate resources based on performance metrics create feedback where successful elements receive more resources, improving performance further, triggering additional resource allocation in cumulative advantage dynamics (Salganik et al., 2006). Search ranking algorithms that prioritise frequently clicked results create feedback—higher ranking increases visibility, increasing clicks, justifying higher ranking, establishing reinforcement where initial ranking advantages compound through iterative feedback (Hannak et al., 2014). The amplification reduces diversity: alternatives receive decreasing attention as dominant patterns capture increasing share of system activity, creating winner-take-all dynamics where feedback concentrates outcomes around initially successful options (Tucker & Zhang, 2011). Initial conditions or random fluctuations become magnified into persistent dominance through self-reinforcing feedback that transforms minor advantages into structural entrenchment.
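
A Pólya-urn-style sketch with two identical items shows how early random clicks harden into persistent, run-specific dominance once click probability tracks accumulated clicks; the counts and impression volume are illustrative.

```python
import random

# Two equally good results; the chance of a click is proportional to
# accumulated clicks (ranking follows popularity). Different random runs
# lock into different winners: luck, not quality, gets entrenched.

def click_urn(seed, impressions=20_000):
    rng = random.Random(seed)
    clicks = [1, 1]                               # identical items, identical priors
    for _ in range(impressions):
        i = 0 if rng.random() < clicks[0] / sum(clicks) else 1
        clicks[i] += 1                            # clicks -> rank -> visibility
    return clicks[0] / sum(clicks)

for seed in range(5):
    print(f"seed {seed}: item-0 final share = {click_urn(seed):.2f}")
```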

Bias amplification through feedback occurs when systems trained on biased data produce outputs that reinforce existing biases, creating circular dynamics that perpetuate and intensify discrimination (Barocas & Selbst, 2016). Hiring algorithms trained on historical hiring decisions learn patterns reflecting past biases, producing recommendations that favour similar demographic profiles, which become future hiring data when recommendations are implemented (Selbst et al., 2019). The feedback establishes lock-in: biases embedded in training data become embedded in system behaviour, which generates data embedding those biases further, creating self-perpetuating cycles that intensify rather than moderate initial bias (Corbett-Davies et al., 2017). Each iteration appears defensible—decisions reflect data patterns—but the cumulative effect amplifies systematic discrimination as feedback reinforces patterns that should diminish rather than strengthen (Chouldechova, 2017). Bias becomes a structural property maintained through feedback rather than through discrete decisions subject to correction.

Behavioural adaptation creates feedback when users modify actions in response to system operation, producing user behaviours that influence system inputs and subsequent outputs (Perdomo et al., 2020). Credit scoring systems that influence lending decisions create feedback as borrowers adapt behaviour to improve scores—concentrating on measured factors while ignoring unmeasured dimensions—which changes the relationship between scores and creditworthiness that scoring models assume (Hardt et al., 2016). The adaptation invalidates static models: system deployment changes the population it evaluates, making training data increasingly unrepresentative as users learn which behaviours the system rewards (Selbst et al., 2019). Gaming emerges not through malicious manipulation but through rational response to known evaluation criteria, creating feedback where system transparency or predictability enables adaptation that undermines system validity (O'Neil, 2016). Systems shape user behaviour which shapes system inputs, creating co-evolution where neither users nor systems operate as initially designed once feedback establishes interaction dynamics.
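
A sketch of this dynamic, with invented score and quality distributions and a hypothetical cutoff, shows the score-outcome correlation degrading once applicants just below the cutoff inflate the observed feature:

```python
import random
random.seed(4)

# A score rests on one observable feature. Once the cutoff is known,
# applicants near it push the feature over the line, weakening the
# score-outcome link the model was trained on. All numbers are invented.

def corr(xs, ys):
    n, mx, my = len(xs), sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

quality = [random.gauss(0, 1) for _ in range(4_000)]
score = [q + random.gauss(0, 0.5) for q in quality]    # pre-deployment link
cutoff = 0.5
gamed = [cutoff + 0.01 if cutoff - 1.5 < s < cutoff else s for s in score]

print(f"corr(score, quality) before gaming: {corr(score, quality):.2f}")
print(f"corr(score, quality) after  gaming: {corr(gamed, quality):.2f}")
```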

Metric displacement occurs when measurement systems create feedback that shifts focus from underlying goals to measured proxies (Muller, 2018). Systems that reward performance on observable metrics create incentives for optimising measurements rather than actual objectives, producing feedback in which improving the metric displaces the original purpose as the operative goal (Espeland & Sauder, 2016). Educational systems that reward standardised test scores create feedback—schools optimise teaching for test performance, students focus on test-relevant material, test scores improve while unmeasured learning dimensions stagnate—establishing divergence between measured success and educational goals (Koretz, 2017). The displacement operates through rational response to feedback structure: actors optimise what measurement systems reward, causing measured dimensions to improve while the unmeasured dimensions that metrics were meant to proxy deteriorate (de Rijcke et al., 2016). Goodhart's Law manifests through feedback—when measures become targets, they cease to be good measures as optimisation behaviour distorts the relationship between proxy and goal.

Escalation dynamics emerge when competitive feedback drives participants to increase commitment, producing arms races where relative position determines action rather than absolute benefit (Schelling, 1978). Systems where success depends on outperforming competitors create feedback—each participant's actions trigger compensatory actions by others, establishing cycles where everyone increases effort while relative positions remain unchanged (Frank & Cook, 2010). Advertising expenditure demonstrates escalation: when one competitor increases spending, others must increase to maintain visibility, creating feedback where aggregate spending rises without improving any participant's relative market position (Larrick et al., 2015). The escalation creates waste—resources expended cancel out across competitors—but feedback structure makes unilateral reduction irrational despite collective irrationality of escalation (Biggs et al., 2012). Competitive feedback locks participants into increasing commitment where stopping appears more costly than continuing despite recognition that escalation serves no participant's interest.
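
A two-firm sketch with invented spend levels shows the structure: each firm's best response ratchets total outlay upward while visibility shares, which depend only on relative spend, barely move.

```python
# Whichever firm trails in spend raises it to the rival's level plus a step.
# Visibility share depends only on relative spend, so shares stay near 0.5
# while total outlay climbs. Starting spends and step size are invented.

a = b = 100.0
step = 10.0
for rnd in range(6):
    if a <= b:
        a = b + step          # firm A restores its edge
    if b < a:
        b = a + step          # firm B responds in kind
    print(f"round {rnd}: spend A={a:.0f}  B={b:.0f}  "
          f"A's visibility share={a / (a + b):.3f}")
```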

Detection delays enable drift to advance before correction mechanisms engage, allowing systems to diverge substantially before problems become apparent (Sterman, 2000). Gradual parameter shifts or threshold adjustments produce incremental changes too small to trigger immediate attention, but cumulative drift generates significant divergence before magnitude exceeds detection thresholds (Barabas et al., 2018). Systems that monitor for sudden changes may miss slow drift—like gradually increasing bias or steadily shifting decision criteria—because monitoring mechanisms detect discrete anomalies rather than gradual trends (Ensign et al., 2018). The delay creates intervention challenges: by the time drift becomes evident, substantial divergence has occurred, making correction costly and potentially destabilising compared to continuous minor adjustments that would have prevented drift (Scheffer et al., 2009). Detection delay transforms preventable drift into entrenched divergence requiring major intervention rather than routine correction.
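
A minimal sketch, with an invented alert threshold and drift rate, shows why step-change monitoring misses slow drift of the same cumulative magnitude:

```python
# A monitor that alerts on large step changes never fires on a slow drift
# of the same total size. Threshold and drift rate are invented.

def step_alerts(series, threshold=5.0):
    return [t for t in range(1, len(series))
            if abs(series[t] - series[t - 1]) > threshold]

sudden = [0.0] * 50 + [20.0] * 50            # one abrupt 20-unit jump
gradual = [0.2 * t for t in range(100)]      # the same 20 units, drip-fed

print("sudden change, alerts at:", step_alerts(sudden))    # -> [50]
print("gradual drift, alerts at:", step_alerts(gradual))   # -> [] (missed)
```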

Normalisation of deviance occurs when gradual drift establishes new baselines, making current states appear normal despite substantial divergence from original configurations (Vaughan, 1996). Repeated exposure to incremental changes habituates observers to new conditions, reducing perceived abnormality of states that would appear deviant if compared to original design rather than recent history (Haunschild & Sullivan, 2002). Each small drift appears acceptable because it differs only marginally from the previous state, but accumulated shifts produce configurations dramatically different from initial conditions without triggering recognition that normalisation has occurred (Rerup, 2009). The normalisation prevents correction: when drift becomes baseline expectation, current state no longer appears problematic even when objective comparison to original design would reveal substantial divergence (Perrow, 1984). Feedback establishes incremental acceptance that masks cumulative transformation, making drift effectively undetectable through contemporaneous observation, which lacks the historical reference points that would reveal its magnitude.

Unintended consequences emerge from feedback dynamics rather than design flaws, arising when systems optimise locally without accounting for broader system interactions (Meadows, 2008). Interventions that address immediate problems through feedback mechanisms may trigger compensatory responses elsewhere in the system, producing outcomes that undermine intervention effectiveness or create new problems (Sterman, 2000). Traffic management that optimises flow on monitored roads creates feedback—congestion shifts to unmonitored routes as drivers adapt, reducing improvement on optimised roads while creating new congestion elsewhere, establishing whack-a-mole dynamics where problem location shifts rather than problem resolving (Youn et al., 2008). The consequences arise from system structure rather than implementation errors: feedback creates adaptation that designers did not anticipate because models assumed static rather than reactive environments (Biggs et al., 2012). Unintended effects reflect inherent challenge of predicting feedback-driven adaptation in complex systems where interventions trigger responses that reshape the system being intervened upon.

Runaway feedback occurs when amplification mechanisms lack inherent limits, producing exponential divergence until external constraints impose boundaries (Lenton et al., 2008). Positive feedback loops without stabilising negative feedback create unstable dynamics where small perturbations cascade into system-wide transformations (Scheffer et al., 2009). Financial market flash crashes demonstrate runaway feedback: automated trading algorithms responding to price movements create feedback where initial declines trigger selling which accelerates declines, establishing cascades that continue until circuit breakers halt trading (Kirilenko & Lo, 2013). The runaway occurs because feedback operates faster than human intervention or inherent stabilisation—systems reach extreme states before corrective mechanisms engage, requiring external interruption rather than internal stabilisation (Brogaard et al., 2014). Runaway dynamics reveal vulnerability in feedback-driven systems: when amplification exceeds damping, stable operation depends on external constraints that may fail to engage before irreversible divergence occurs.
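
A toy flash-crash loop, with invented impact coefficients and halt level, shows selling feeding on its own price impact until an external circuit breaker, not internal stabilisation, ends the cascade:

```python
# Algorithmic sellers react to the size of the last price drop, and their
# selling produces the next drop: amplification without an internal limit.
# All coefficients and the halt level are invented for illustration.

price, last_drop = 100.0, 0.1            # a small initial perturbation
for tick in range(50):
    sell_volume = 40 * last_drop         # algorithms respond to the move itself
    last_drop = 0.05 * sell_volume       # price impact of that selling
    price -= last_drop
    print(f"tick {tick}: price={price:6.2f}  drop={last_drop:.2f}")
    if price <= 93.0:                    # circuit breaker: external constraint
        print("trading halted")
        break
```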

Multi-loop interactions create complex dynamics when multiple feedback processes operate simultaneously, producing emergent behaviours not present in any single loop (Sterman, 2000). Competing feedback mechanisms may reinforce or counteract each other, creating context-dependent dominance where which loop controls system behaviour shifts based on operational state (Biggs et al., 2012). Ecosystem management involves multiple loops—population feedback, resource feedback, predator-prey feedback—interacting to produce aggregate dynamics that single-loop analysis cannot predict (Anderies et al., 2013). The interaction creates nonlinear responses: system reactions to interventions vary depending on which feedback loops currently dominate, making outcomes unpredictable from linear extrapolation or isolated loop examination (Meadows, 2008). Multi-loop systems require holistic analysis that accounts for interaction effects, but opacity typically limits understanding to individual loops while emergent multi-loop dynamics operate unobserved until aggregate outcomes manifest.
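
A minimal Lotka-Volterra sketch, with illustrative coefficients and plain Euler integration, shows two coupled loops producing sustained oscillation that neither loop exhibits alone:

```python
# A reinforcing prey-growth loop and a balancing predation loop interact;
# the coupled system oscillates even though each loop alone would simply
# grow or decay. Coefficients and step size are illustrative.

prey, pred = 10.0, 5.0
a, b, c, d, dt = 1.1, 0.4, 0.4, 0.1, 0.01
for step in range(4001):
    dprey = ( a * prey - b * prey * pred) * dt   # growth loop minus predation
    dpred = (-c * pred + d * prey * pred) * dt   # decay loop plus predation gain
    prey, pred = prey + dprey, pred + dpred
    if step % 800 == 0:
        print(f"t={step * dt:5.1f}  prey={prey:7.2f}  predators={pred:6.2f}")
```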

Irreversibility emerges when feedback drives systems past tipping points where return to original states becomes impossible without disproportionate intervention (Lenton et al., 2008). Positive feedback can push systems into alternate stable states where different feedback loops dominate, preventing return to original equilibrium even when initial driving forces reverse (Scheffer et al., 2009). Network platform dominance demonstrates irreversibility: once network effects establish user concentration, the platform achieves self-reinforcing stability where switching costs exceed benefits even when alternatives offer superior functionality, creating lock-in that intervention cannot easily overcome (Zhu & Iansiti, 2012). The irreversibility reflects feedback structure: systems that drift into new equilibria stabilise there through feedback mechanisms that resist returning to previous states (Biggs et al., 2012). Drift becomes permanent transformation when feedback establishes self-maintaining configurations that persist despite removal of original driving forces that initiated divergence.
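
A bistable-system sketch (an invented potential with dynamics dx/dt = x - x**3 + f) demonstrates hysteresis: forcing is ramped past the fold and then removed, and the state does not return.

```python
# Two stable states (near -1 and +1). Forcing f is ramped up past the
# tipping value (about 0.38 for these dynamics), then back to zero; the
# system settles in the upper state and stays there. Schedule is invented.

def settle(x, forcing, steps=2000, dt=0.01):
    for _ in range(steps):
        x += (x - x**3 + forcing) * dt
    return x

state = -1.0                               # start in the lower stable state
for f in (0.0, 0.2, 0.4, 0.2, 0.0):        # ramp past the tipping value, then back
    state = settle(state, f)
    print(f"forcing={f:.1f}  settled state={state:+.2f}")
```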

Intervention resistance occurs when feedback dynamics neutralise corrective attempts, producing system robustness that prevents restoration of original states (Anderies et al., 2013). Systems with strong feedback loops respond to interventions by adjusting parameters or behaviours that counteract intervention effects, maintaining current state despite external pressure to change (Sterman, 2000). Attempts to reduce bias in algorithmic systems through threshold adjustment may trigger parameter drift that compensates for explicit corrections, maintaining discriminatory outcomes through different mechanisms rather than eliminating discrimination (Selbst et al., 2019). The resistance reflects feedback adaptation: systems optimised through feedback loops adjust to maintain optimisation targets despite constraints, creating evolutionary pressure that circumvents intervention intent (Ensign et al., 2018). Effective intervention requires disrupting feedback structure rather than adjusting parameters, but feedback opacity often conceals which loops drive behaviour, making structural intervention difficult when mechanisms remain invisible.

Feedback loops create circular causation where outputs influence inputs, establishing dynamics that either amplify deviations through positive feedback or counteract them through negative feedback. Time delays between action and feedback produce oscillation and overshoot when responses address outdated conditions. Parameter drift occurs through repeated feedback-driven adjustments that progressively shift operational configurations away from original calibrations. Data feedback loops emerge when systems train on their own outputs, while optimisation drift produces configurations excelling at measured objectives while degrading unmeasured dimensions. Threshold shifts and reinforcement amplification strengthen existing patterns while suppressing alternatives. Bias amplification, behavioural adaptation, and metric displacement demonstrate how feedback transforms initial conditions into entrenched patterns. Detection delays enable substantial drift before correction mechanisms engage, while normalisation of deviance establishes shifted baselines that mask divergence. Unintended consequences emerge from local optimisation producing system-wide effects, and runaway feedback creates exponential divergence until external constraints impose limits. Multi-loop interactions produce emergent behaviours, while irreversibility and intervention resistance occur when feedback drives systems past tipping points into alternate stable states. Drift and unintended consequences arise not from misuse but from temporal dynamics of systems responding to their own outputs across operational iterations.


References

Anderies, J. M., Folke, C., Walker, B., & Ostrom, E. (2013). Aligning key concepts for global change policy: Robustness, resilience, and sustainability. Ecology and Society, 18(2), 8. https://doi.org/10.5751/ES-05178-180208
Åström, K. J., & Murray, R. M. (2008). Feedback systems: An introduction for scientists and engineers. Princeton University Press.
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
Barabas, C., Dinakar, K., Ito, J., Virza, M., & Zittrain, J. (2018). Interventions over predictions: Reframing the ethical debate for actuarial risk assessment. In Proceedings of Machine Learning Research (Vol. 81, pp. 1–15).
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31
Biggs, R., Schlüter, M., & Schoon, M. L. (Eds.). (2012). Principles for building resilience: Sustaining ecosystem services in social-ecological systems. Cambridge University Press.
Brogaard, J., Hendershott, T., & Riordan, R. (2014). High-frequency trading and price discovery. The Review of Financial Studies, 27(8), 2267–2306. https://doi.org/10.1093/rfs/hhu032
Chen, F., Drezner, Z., Ryan, J. K., & Simchi-Levi, D. (2000). Quantifying the bullwhip effect in a simple supply chain: The impact of forecasting, lead times, and information. Management Science, 46(3), 436–443. https://doi.org/10.1287/mnsc.46.3.436.12069
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). ACM. https://doi.org/10.1145/3097983.3098095
de Rijcke, S., Wouters, P. F., Rushforth, A. D., Franssen, T. P., & Hammarfelt, B. (2016). Evaluation practices and effects of indicator use—A literature review. Research Evaluation, 25(2), 161–169. https://doi.org/10.1093/reseval/rvv038
Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. In Proceedings of Machine Learning Research (Vol. 81, pp. 1–12).
Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.
Ford, A. (2010). Modeling the environment (2nd ed.). Island Press.
Frank, R. H., & Cook, P. J. (2010). The winner-take-all society: Why the few at the top get so much more than the rest of us. Random House.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Hannak, A., Soeller, G., Lazer, D., Mislove, A., & Wilson, C. (2014). Measuring price discrimination and steering on e-commerce web sites. In Proceedings of the 2014 Conference on Internet Measurement Conference (pp. 305–318). ACM. https://doi.org/10.1145/2663716.2663744
Harambam, J., Helberger, N., & van Hoboken, J. (2018). Democratizing algorithmic news recommenders: How to materialize voice in a technologically saturated media ecosystem. Philosophical Transactions of the Royal Society A, 376(2133), 20180088. https://doi.org/10.1098/rsta.2018.0088
Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (pp. 3315–3323).
Haunschild, P. R., & Sullivan, B. N. (2002). Learning from complexity: Effects of prior accidents and incidents on airlines' learning. Administrative Science Quarterly, 47(4), 609–643. https://doi.org/10.2307/3094911
Kirilenko, A. A., & Lo, A. W. (2013). Moore's Law versus Murphy's Law: Algorithmic trading and its discontents. Journal of Economic Perspectives, 27(2), 51–72. https://doi.org/10.1257/jep.27.2.51
Koretz, D. (2017). The testing charade: Pretending to make schools better. University of Chicago Press.
Larrick, R. P., Soll, J. B., & Keeney, R. L. (2015). Designing better energy metrics for consumers. Behavioral Science & Policy, 1(1), 63–75. https://doi.org/10.1353/bsp.2015.0003
Lee, H. L., Padmanabhan, V., & Whang, S. (1997). The bullwhip effect in supply chains. MIT Sloan Management Review, 38(3), 93–102.
Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., & Schellnhuber, H. J. (2008). Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences, 105(6), 1786–1793. https://doi.org/10.1073/pnas.0705414105
Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x
Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Perdomo, J., Zrnic, T., Mendler-Dünner, C., & Hardt, M. (2020). Performative prediction. In Proceedings of the 37th International Conference on Machine Learning (pp. 7599–7609). PMLR.
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books.
Rerup, C. (2009). Attentional triangulation: Learning from unexpected rare crises. Organization Science, 20(5), 876–893. https://doi.org/10.1287/orsc.1090.0467
Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 94, 192–233.
Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762), 854–856. https://doi.org/10.1126/science.1121066
Scheffer, M., Bascompte, J., Brock, W. A., Brovkin, V., Carpenter, S. R., Dakos, V., Held, H., van Nes, E. H., Rietkerk, M., & Sugihara, G. (2009). Early-warning signals for critical transitions. Nature, 461(7260), 53–59. https://doi.org/10.1038/nature08227
Schelling, T. C. (1978). Micromotives and macrobehavior. W. W. Norton & Company.
Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59–68). ACM. https://doi.org/10.1145/3287560.3287598
Skogestad, S., & Postlethwaite, I. (2005). Multivariable feedback control: Analysis and design (2nd ed.). John Wiley & Sons.
Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. McGraw-Hill.
Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2), 1–11. https://doi.org/10.1177/20563051211008828
Tucker, C., & Zhang, J. (2011). Growing two-sided networks by advertising the user base: A field experiment. Marketing Science, 30(5), 805–814. https://doi.org/10.1287/mksc.1110.0641
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
Youn, H., Gastner, M. T., & Jeong, H. (2008). Price of anarchy in transportation networks: Efficiency and optimality control. Physical Review Letters, 101(12), 128701. https://doi.org/10.1103/PhysRevLett.101.128701
Zhu, F., & Iansiti, M. (2012). Entry into platform-based markets. Strategic Management Journal, 33(1), 88–106. https://doi.org/10.1002/smj.941