
Abstraction, Opacity, and Black-Box Systems

Section 6: Technology & Tools — Chapter 3
[Figure: Abstraction Layers — From Visible Process to Opaque Output. A visible input (data, request, parameters) passes through five abstraction layers of decreasing visibility before emerging as a visible output (result, decision, recommendation): Layer 1, User Interface (visible controls and displays; simplified representations; ~30% visibility); Layer 2, Application Logic (business rules and workflows; partially documented; ~15%); Layer 3, Algorithmic Processing (statistical models and ML systems; proprietary or complex logic; ~5%); Layer 4, System Infrastructure (data stores, networks, APIs; hidden dependencies; ~2%); Layer 5, Hardware/Physical (processors, storage, circuits; complete opacity to users; 0%). Within the opaque processing region, trust operates without comprehension, causal traceability breaks (input → ? → output), error detection is delayed until output, and accountability is diffused across layers.]
Abstraction simplifies interaction by concealing operational complexity, presenting users with high-level interfaces that hide underlying mechanisms. While abstraction enables accessibility—allowing non-experts to operate complex systems—it simultaneously creates opacity, separating surface interaction from internal process. Black-box systems emerge when this separation becomes complete: inputs enter, outputs emerge, but the transformation pathway remains invisible or incomprehensible to users and often to operators. Opacity introduces dependence on outcomes without process understanding, shifting trust from comprehension—confidence based on knowing how systems work—to acceptance—reliance based on observing that systems produce results. This transition alters oversight capacity, error detection timing, and accountability distribution across technical infrastructures.

Abstraction layers structure technical systems hierarchically, with each layer concealing implementation details from layers above while providing simplified interfaces for interaction (Parnas, 1972). The layering creates separation of concerns—users interact with high-level functions without needing to understand low-level mechanisms that execute those functions—enabling specialisation where expertise concentrates at appropriate system levels (Dijkstra, 1968). A software application presents users with buttons, menus, and displays while concealing database queries, memory management, and network protocols that operate beneath the interface (Tanenbaum & Wetherall, 2010). Each abstraction level provides functionality to the level above while hiding complexity that would overwhelm users if exposed, creating usability through deliberate information reduction (Abelson & Sussman, 1996). The reduction enables operation without complete understanding but also removes visibility into processes that determine how inputs transform into outputs.
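
The information-hiding principle described above can be sketched in code: a caller interacts only with a public interface, while the storage format and matching logic remain concealed implementation details. This is a minimal illustrative sketch; the `UserDirectory` class and its behaviour are assumptions for demonstration, not drawn from any cited system.

```python
# Minimal sketch of information hiding (in the spirit of Parnas, 1972):
# callers use the public interface; the index structure and the name
# normalisation beneath it are hidden details that could change without
# affecting any caller.

class UserDirectory:
    def __init__(self):
        # Hidden implementation detail: a dict keyed by lower-cased name.
        self._index = {}

    def add(self, name, email):
        self._index[name.lower()] = email

    def lookup(self, name):
        # Case-insensitive matching happens invisibly beneath the interface.
        return self._index.get(name.lower())

directory = UserDirectory()
directory.add("Alice", "alice@example.org")
print(directory.lookup("ALICE"))  # prints alice@example.org
```

The caller gains usability precisely because the lookup mechanism is invisible; the same concealment, compounded across many layers, produces the opacity discussed in the remainder of the chapter.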

Opacity emerges when abstraction eliminates causal traceability—the ability to follow pathways from input through transformation to output (Burrell, 2016). Transparent systems allow inspection of intermediate states, decision points, and processing steps that connect initial conditions to final results (Rudin, 2019). Opaque systems present only endpoints—what went in and what came out—without revealing transformation logic, creating inscrutability where users cannot determine why specific inputs produced particular outputs (Wachter et al., 2017). A credit scoring algorithm that outputs accept/reject decisions without disclosing scoring criteria, factor weights, or threshold logic operates as a black box: applicants see their input data and the final decision but cannot trace how one produced the other (Barocas & Selbst, 2016). The opacity prevents reverse-engineering—working backward from outputs to understand decision logic—making systems uninterpretable even when functionality appears consistent.
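
The credit-scoring example can be made concrete with a toy decision function. The weights and threshold below are illustrative assumptions that stand in for concealed decision logic; the point is that two similar applicants receive different outputs, and the outputs alone offer no pathway back to the reason why.

```python
# Hypothetical black-box credit decision. The applicant sees only their
# submitted input and an accept/reject output; the weights and threshold
# below are invented for illustration and invisible to the user.

_HIDDEN_WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
_HIDDEN_THRESHOLD = 10.0

def decide(applicant):
    score = sum(_HIDDEN_WEIGHTS[k] * applicant[k] for k in _HIDDEN_WEIGHTS)
    return "accept" if score >= _HIDDEN_THRESHOLD else "reject"

# Two similar applicants; from the outputs alone there is no way to trace
# which factor, weight, or threshold produced the divergent decisions.
a = {"income": 40, "debt": 10, "years_employed": 5}   # hidden score 11.0
b = {"income": 38, "debt": 12, "years_employed": 5}   # hidden score 9.0
print(decide(a), decide(b))  # prints: accept reject
```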

Complexity-driven opacity arises when system behaviour emerges from interactions too numerous or intricate for human comprehension, even when technical details are theoretically accessible (Brooks, 1987). Large-scale systems with thousands of components, millions of lines of code, or billions of parameter values exceed cognitive capacity for holistic understanding (Lehman, 1996). Machine learning models with deep neural architectures contain so many weighted connections that no individual—including designers—can fully explain how specific inputs propagate through the network to produce particular outputs (LeCun et al., 2015). The complexity creates functional inscrutability distinct from deliberate concealment: the system logic exists in principle but exceeds practical interpretability, making comprehensive understanding unattainable regardless of access rights or technical expertise (Castelvecchi, 2016). Users confront systems they cannot meaningfully inspect because inspection would require processing capacity beyond human cognitive limits.

Proprietary opacity results from deliberate information restriction, where system owners withhold technical details to protect competitive advantages, trade secrets, or strategic positioning (Pasquale, 2015). Closed systems provide no access to source code, decision algorithms, or operational logic, presenting only interfaces for interaction and outputs for observation (Gillespie, 2014). The restriction creates asymmetric knowledge: system operators possess complete understanding while users operate with only surface-level awareness of functionality (Kitchin, 2017). Search ranking algorithms, content recommendation systems, and automated decision platforms often withhold criteria and weighting logic, revealing only that certain inputs produce certain outputs without disclosing transformation mechanisms (Gillespie, 2018). The proprietary restriction serves business interests—preventing competitors from replicating functionality—but also shields systems from scrutiny that might reveal biases, errors, or unintended behaviours embedded within concealed logic.

Interface-mediated opacity occurs when system design presents simplified views that compress complex processes into simple indicators or controls (Dourish, 2004). Dashboards that display aggregated metrics, progress bars showing completion percentages, or status lights indicating system state reduce multidimensional processes to one-dimensional representations (Few, 2006). The compression serves usability—users need actionable information, not exhaustive technical detail—but it also obscures underlying complexity that determines whether simple indicators accurately reflect system state (Caraban et al., 2019). A battery indicator showing 50% charge may conceal degradation patterns, temperature dependencies, or usage profiles that affect actual remaining capacity, presenting simplified state that diverges from nuanced reality (Ferreira et al., 2011). Users interact with representations rather than reality, creating dependency on interface accuracy without ability to verify whether surface indicators correspond to underlying conditions.
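
The battery indicator illustrates the divergence between displayed and effective state. The degradation and temperature model below is an invented simplification, not a real battery model; it shows only how a single displayed number can conceal conditions that change what the number means.

```python
# Sketch of interface-mediated compression: a multidimensional battery state
# is reduced to one displayed percentage, discarding degradation ("health")
# and temperature effects that determine usable capacity. The model is an
# illustrative assumption, not an accurate battery simulation.

def displayed_percent(state):
    # The interface shows only nominal charge relative to design capacity.
    return round(state["charge_mah"] / state["design_mah"] * 100)

def effective_percent(state):
    # Hidden reality: degraded cells and cold temperatures shrink the
    # capacity the device can actually draw on.
    usable = state["design_mah"] * state["health"]
    if state["temp_c"] < 0:
        usable *= 0.8
    return round(min(state["charge_mah"], usable) / state["design_mah"] * 100)

state = {"design_mah": 4000, "charge_mah": 3000, "health": 0.7, "temp_c": -5}
print(displayed_percent(state), effective_percent(state))  # prints: 75 56
```

The user acts on the 75% representation; the conditions that make 56% the better estimate are invisible at the interface.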

Temporal opacity emerges when systems operate at speeds exceeding human perception or across timescales too extended for continuous monitoring (Kirilenko & Lo, 2013). High-frequency trading systems execute thousands of transactions per second, creating decision-action cycles that complete before human observation or intervention becomes possible (Brogaard et al., 2014). Long-duration processes spanning months or years—such as infrastructure degradation, climate systems, or institutional evolution—unfold too gradually for immediate feedback, creating delayed causality where outcomes manifest long after initiating conditions (Meadows, 2008). Both extremes create opacity through temporal mismatch: processes occur too fast or too slow for human perceptual systems designed for intermediate timescales, making causation invisible even when system logic remains accessible (Vertesi, 2014). Users cannot observe transformation processes directly, relying instead on summaries, logs, or aggregated data that represent rather than reveal actual operational sequences.

Distributed opacity arises in systems where functionality emerges from interaction across multiple components, platforms, or organisations, creating causation too dispersed for localised understanding (Kallinikos et al., 2013). Integrated systems that span different technical infrastructures, cross organisational boundaries, or combine services from multiple providers create emergent behaviour where outcomes reflect collective interaction rather than any single component's logic (Orlikowski & Scott, 2008). An online purchase may involve payment processors, inventory systems, shipping logistics, credit verification, and fraud detection across separate platforms, each contributing decision inputs that collectively determine transaction outcome (Constantinides & Barrett, 2015). The distribution prevents comprehensive visibility: no single actor sees the complete decision pathway, and integration points where systems interact introduce dependencies invisible to any individual component (Tilson et al., 2010). Users confront collective outcomes without access to distributed causation that produced them.

Trust calibration shifts when opacity eliminates understanding-based confidence, forcing reliance on observation of reliability rather than comprehension of mechanism (Lee & See, 2004). Transparent systems enable trust grounded in verified logic—users understand how systems work and trust because they can confirm appropriate operation (Hancock et al., 2011). Opaque systems require trust based on consistent performance—users cannot verify logic but observe that outputs prove reliable, building confidence through repeated successful outcomes rather than through understanding (Kizilcec, 2016). The shift creates fragility: trust persists only while performance remains consistent, collapsing when opacity-obscured errors produce unexpected failures that users cannot diagnose or predict (Parasuraman & Manzey, 2010). A navigation system trusted through successful routing loses credibility when errors occur, but opacity prevents users from determining whether failures reflect systematic flaws or isolated anomalies, making appropriate trust recalibration impossible without process visibility.

Error concealment occurs when abstraction layers hide faults, allowing problems to propagate undetected through opaque processes until they manifest in outputs (Perrow, 1984). Transparent systems enable early error detection—intermediate state visibility allows identification of problems before they cascade—while opaque systems reveal errors only at the output stage, after transformation processes have completed (Leveson, 2011). A data pipeline that ingests, transforms, and analyses information may contain validation errors, classification mistakes, or calculation faults that remain invisible until final outputs prove incorrect, by which time identifying error sources requires reverse-engineering through opaque processing stages (O'Neil, 2016). The delayed detection increases error impact—problems affect more downstream processes before discovery—and complicates diagnosis by obscuring where in the transformation chain faults originated (Ensign et al., 2018). Opacity transforms errors from detectable anomalies into mysteries requiring forensic investigation.
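
The data-pipeline case can be sketched directly. The stages and the unit-conversion bug below are illustrative assumptions: the opaque pipeline exposes only its final number, so the fault is visible merely as a wrong-looking output, while the transparent variant records intermediate states that localise where the fault entered.

```python
# Sketch of error concealment in an opaque pipeline: a unit-conversion bug
# in a middle stage raises no error and stays invisible until the final
# aggregate looks wrong. Stage names and the bug are illustrative.

def ingest(raw):
    return [float(x) for x in raw]

def convert(values):
    # Bug: divides by 100 instead of 10. Silently wrong; nothing fails.
    return [v / 100 for v in values]

def aggregate(values):
    return sum(values) / len(values)

def opaque_pipeline(raw):
    # Only endpoints are visible; intermediate states are discarded.
    return aggregate(convert(ingest(raw)))

def transparent_pipeline(raw, trace):
    # Each intermediate state is recorded, so the faulty stage is locatable.
    values = ingest(raw)
    trace.append(("ingest", values))
    values = convert(values)
    trace.append(("convert", values))
    result = aggregate(values)
    trace.append(("aggregate", result))
    return result

trace = []
print(opaque_pipeline(["250", "350"]))  # prints 3.0, an order of magnitude off
transparent_pipeline(["250", "350"], trace)
print(trace)  # the recorded convert-stage output exposes where the fault entered
```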

Validation challenges intensify with opacity, as verification requires comparing outputs to expected results without ability to confirm that intermediate processes operated correctly (Barabas et al., 2018). Output validation catches systematic errors that produce consistently incorrect results but may miss subtle biases, edge-case failures, or context-dependent faults that manifest only under specific conditions (Selbst et al., 2019). A hiring algorithm that systematically excludes qualified candidates based on protected characteristics may produce outputs that appear reasonable—rejected applicants lack qualifications on paper—while embedding discrimination within opaque scoring logic that output validation cannot detect (Barocas & Selbst, 2016). Process transparency would enable examination of scoring criteria and factor weights, revealing problematic correlations, but opacity restricts validation to output patterns that may conceal rather than expose systematic problems (Corbett-Davies et al., 2017). Users must infer correctness from outcomes without access to decision pathways that produced them.

Accountability diffusion occurs when opacity prevents clear assignment of responsibility for system behaviour and outcomes (Martin, 2019). Transparent systems enable tracing outcomes to specific decision points, design choices, or operational parameters, supporting responsibility attribution to actors who controlled those elements (Gillespie, 2018). Opaque systems obscure causation pathways, creating ambiguity about whether problems reflect design flaws, implementation errors, training data biases, deployment misconfigurations, or emergent behaviours unanticipated by any actor (Selbst et al., 2019). The ambiguity enables deflection—designers cite proper functionality, operators reference approved usage, users lack access to system internals—without mechanisms for establishing causation that accountability requires (Leonardi, 2013). Harmful outcomes occur without assignable fault, creating responsibility gaps where system opacity shields actors from scrutiny by preventing demonstration of how their choices produced problematic results.

Dependency without understanding emerges when opacity-obscured systems become essential infrastructure that users rely upon despite lacking comprehension of operational mechanisms (Star, 1999). Critical systems—power grids, financial networks, communication platforms—embed within social and economic structures while remaining technically opaque to the populations they serve (Introna, 2011). The dependency creates vulnerability: when opaque systems fail, malfunction, or behave unexpectedly, dependent users lack knowledge needed to diagnose problems, implement workarounds, or revert to alternative approaches (Hanseth & Lyytinen, 2010). A supply chain coordination system that becomes essential to inventory management, ordering, and distribution creates organisational dependence where system outage or malfunction disrupts operations, but opacity prevents users from understanding failure modes, anticipating vulnerabilities, or maintaining independent capabilities that would enable continuation without system access (Monteiro et al., 2013). Reliance deepens as alternatives atrophy, but comprehension does not accompany dependence.

Interpretability techniques attempt to create visibility into opaque systems, using approximation methods, sensitivity analysis, or model-agnostic explanations to illuminate how inputs influence outputs (Rudin, 2019). Local interpretability explains individual predictions by identifying which input features most strongly influenced specific outcomes, while global interpretability characterises overall system behaviour across input distributions (Ribeiro et al., 2016). Attention mechanisms, saliency maps, and counterfactual explanations provide partial visibility into black-box decision processes, revealing factors the system weighted heavily without exposing complete decision logic (Guidotti et al., 2018). However, interpretability techniques themselves operate as approximations—simplified models of complex systems—creating second-order opacity where explanations may not accurately reflect underlying mechanisms they purport to illuminate (Selbst & Barocas, 2018). An attention map showing which image regions influenced classification reveals correlation but not causation, providing partial insight without complete understanding of transformation processes that connect visual features to category assignments.
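
A minimal form of local interpretability is perturbation-based sensitivity analysis: nudge one input feature at a time and record how the black box's output moves. The stand-in model below is an assumed linear function chosen so the result is checkable; for real opaque systems the same procedure yields only an approximation, which is precisely the second-order opacity noted above.

```python
# Perturbation-based local sensitivity sketch: toggle one input feature at
# a time and measure the change in the black box's output. The "black box"
# here is an assumed linear stand-in; real targets conceal their logic.

def black_box(features):
    # Stand-in opaque model (in practice these weights would be unknown).
    w = [2.0, -1.0, 0.5]
    return sum(wi * fi for wi, fi in zip(w, features))

def local_sensitivity(model, x, delta=1.0):
    """Approximate each feature's local influence by finite differences."""
    base = model(x)
    influences = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        influences.append(model(perturbed) - base)
    return influences

print(local_sensitivity(black_box, [1.0, 1.0, 1.0]))  # prints [2.0, -1.0, 0.5]
```

The probe recovers the exact weights only because the stand-in is linear; applied to a nonlinear system, the same numbers describe behaviour near one input point and may diverge sharply from the global decision logic.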

Explainability requirements in regulatory contexts mandate that automated decision systems provide justifications for outputs, creating pressure to reduce opacity in domains where decisions affect legal rights or significant interests (Wachter et al., 2017). The requirements assume that explanation enables oversight—if systems can articulate why they made decisions, humans can evaluate whether reasoning reflects appropriate criteria (Edwards & Veale, 2017). However, explanation generation may itself be opaque, with systems producing post-hoc rationalisations that satisfy formal requirements without revealing actual decision pathways (Ananny & Crawford, 2018). A loan approval system might explain that credit score, income, and employment history determined rejection, but without disclosing factor weights, threshold values, or interaction effects between variables—providing explanation that appears meaningful while concealing decision logic that explanation purports to reveal (Selbst & Barocas, 2018). Formal explainability coexists with practical opacity when explanations remain too simplified, technical, or incomplete to enable genuine understanding.

Skill requirements for comprehension increase with system complexity, creating stratified opacity where understanding becomes expertise territory inaccessible to general users (Crawford & Joler, 2018). Simple systems allow broad comprehension—users can understand mechanisms with moderate effort—while complex systems demand specialised knowledge in mathematics, computer science, statistics, or domain-specific disciplines (O'Neil, 2016). A neural network trained on millions of examples using gradient descent optimisation across thousands of parameters requires understanding of calculus, linear algebra, probability theory, and machine learning principles to comprehend operation, creating accessibility barriers where technical documentation exists but remains incomprehensible to non-specialists (LeCun et al., 2015). The expertise requirement creates knowledge asymmetry: a small technical elite possesses understanding while the broader population affected by system outputs operates with only surface-level awareness, unable to evaluate whether systems function appropriately or embed problematic assumptions within their opacity-concealed logic.

Normalisation of opacity occurs when black-box operation becomes standard practice, reducing expectations for understanding and increasing acceptance of inscrutability as an inevitable feature of complex systems (Introna, 2011). Repeated interaction with opaque systems trains users to operate without comprehension, substituting procedural knowledge—how to use the system—for conceptual understanding—how the system works (Salomon et al., 1991). The normalisation extends from individual adaptation to institutional acceptance, where organisations, regulators, and policymakers cease demanding transparency as opacity becomes industry standard or technical inevitability (Pasquale, 2015). Users stop asking how systems work because experience establishes that such questions receive no satisfactory answers, shifting expectations from understanding to functional operation—systems need only produce acceptable outputs, not provide visibility into transformation processes (Star, 1999). The normalisation reduces pressure for transparency, enabling opacity to deepen as users and overseers accept inscrutability as an unavoidable condition of technical sophistication.

Auditability mechanisms attempt to create external verification capacity despite internal opacity, using controlled inputs, output monitoring, or comparative testing to detect systematic problems without requiring process understanding (Sandvig et al., 2014). Black-box auditing tests systems by submitting inputs with known characteristics and examining outputs for patterns indicating bias, discrimination, or error (Diakopoulos, 2015). Comparative testing evaluates whether similar inputs receive similar outputs, detecting disparate treatment that may indicate problematic decision logic even when that logic remains concealed (Sandvig et al., 2014). The approach enables oversight without transparency—auditors need not understand how systems work to identify when they fail to work appropriately—but suffers limitations from opacity-driven constraints: auditing detects only patterns observable in outputs, missing problems that manifest subtly or require process visibility to recognise (Ensign et al., 2018). Systematic bias that affects all evaluated cases equally may remain invisible to output comparison, requiring process inspection that black-box auditing cannot provide.
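
The comparative-testing approach can be sketched as a paired audit: submit input pairs that differ only in one sensitive attribute and flag cases where the output diverges. The audited decision function below, including its embedded bias, is an illustrative assumption standing in for a system whose logic the auditor cannot see.

```python
# Black-box paired-audit sketch: probe the system with inputs that differ
# only in one sensitive attribute and flag divergent outputs. The audited
# function (and its concealed bias) is invented for illustration.

def audited_system(applicant):
    # Hypothetical opaque system with hidden disparate treatment of group B.
    score = applicant["income"] * 0.5
    if applicant["group"] == "B":
        score -= 5  # concealed penalty, invisible in any single output
    return "accept" if score >= 20 else "reject"

def paired_audit(system, base_inputs, attribute, values):
    """Return base inputs where flipping `attribute` alone changes the output."""
    flagged = []
    for base in base_inputs:
        outputs = {system(dict(base, **{attribute: v})) for v in values}
        if len(outputs) > 1:
            flagged.append(base)
    return flagged

probes = [{"income": 45, "group": "A"}, {"income": 60, "group": "A"}]
print(paired_audit(audited_system, probes, "group", ["A", "B"]))
# Only the income-45 probe is flagged: at income 60 both groups clear the
# threshold, so the bias leaves no trace in the outputs for that case.
```

The final comment illustrates the limitation stated above: the audit detects disparate treatment only where it happens to flip an output, not wherever the concealed penalty operates.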

Information asymmetry between system operators and system subjects creates power imbalances where operators possess complete understanding while subjects encounter opacity (Pasquale, 2015). The asymmetry enables exploitation: operators can optimise system behaviour to serve interests subjects cannot observe or evaluate, extracting value through mechanisms opacity conceals from scrutiny (Kitchin, 2017). Algorithmic pricing systems that adjust cost based on user characteristics, browsing history, or predicted willingness to pay operate opaquely to buyers who see only final prices without visibility into personalisation logic that may charge different rates for identical goods (Hannak et al., 2014). The opacity prevents subjects from detecting discrimination, comparison shopping across personalised prices, or evaluating whether pricing reflects costs versus user profiling, creating an information advantage that operators leverage while subjects operate without awareness that the advantage exists (Ezrachi & Stucke, 2016). Opacity becomes a strategic asset that protects advantageous practices from examination by those the practices affect.

Emergent behaviour in complex systems creates opacity through unpredictability—outcomes that arise from component interactions without being explicitly programmed or anticipated by designers (Lehman, 1996). Multi-agent systems, interconnected platforms, or adaptive algorithms may produce collective behaviours that no individual component exhibits, creating system-level properties invisible at component level and unanticipated during design (Kallinikos et al., 2013). Flash crashes in financial markets emerge from high-frequency trading algorithm interactions that individual algorithms do not intend but collective operation produces, creating cascade effects invisible to any single algorithm's logic (Kirilenko & Lo, 2013). The emergence creates fundamental opacity: behaviour patterns exist only at system level, making them undetectable through component inspection and unpredictable from component design (Brogaard et al., 2014). Users and designers confront systems whose behaviours cannot be fully predicted or explained through analysis of constituent parts, requiring empirical observation of emergent patterns that theory alone cannot anticipate.

Documentation decay occurs when technical specifications, design rationales, and operational logic become outdated, incomplete, or inaccessible, transforming initially transparent systems into opaque ones through knowledge loss (Lehman, 1996). Systems modified over time through patches, updates, and incremental changes accumulate complexity that documentation fails to capture, creating divergence between documented design and actual implementation (Brooks, 1987). Personnel turnover removes institutional knowledge as developers who understood implementation details leave, taking undocumented insights with them (de Souza et al., 2005). The decay operates gradually—each modification slightly reduces correspondence between documentation and reality—until accumulated changes render original specifications unreliable guides to current operation (Parnas, 1994). Legacy systems operating for years or decades become opaque through knowledge erosion, with current operators inheriting systems they cannot fully comprehend because understanding that once existed has dissipated through time and turnover.

Interface evolution can increase opacity when updates prioritise simplification over transparency, progressively removing controls or visibility that enabled user understanding (Dourish, 2004). Redesigns that streamline interaction by eliminating advanced settings, hiding configuration options, or automating previously manual choices create ease-of-use improvements that simultaneously reduce user control and process visibility (Shneiderman et al., 2016). A software update that replaces detailed configuration menus with simplified auto-detect functionality improves accessibility for novice users while removing transparency for experienced users who understood and controlled previous explicit settings (Blackler et al., 2014). The evolution reflects design trade-offs—broader accessibility versus deeper control—but creates temporal opacity where long-term users lose understanding they previously possessed as interfaces remove mechanisms that provided visibility into system operation.

Opacity amplification occurs when systems layer upon other opaque systems, creating compound inscrutability where understanding requires penetrating multiple black boxes simultaneously (Kallinikos et al., 2013). Platforms that integrate third-party services combine their own opacity with opacity from external systems, producing outcomes that reflect multiple concealed transformation processes (Constantinides & Barrett, 2015). A mobile application that uses cloud storage, payment processing, location services, and advertising networks operates through four opaque systems—the app's internal logic plus three external services—each contributing decision inputs while concealing their contribution mechanisms (Tilson et al., 2010). The layering creates multiplication rather than addition of opacity: understanding requires penetrating not just one black box but several nested or interacting boxes, making comprehensive visibility practically unattainable even when individual components might theoretically allow inspection (Orlikowski & Scott, 2008). Users encounter compound systems whose behaviour reflects aggregated inscrutability exceeding any single component's opacity.
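
Compound opacity can be sketched as a composition of opaque checks. Each service below is an invented stand-in; the user-facing function returns only a single pass/fail outcome, so which layer vetoed the transaction, and why, is invisible from the outside.

```python
# Sketch of opacity amplification: a transaction outcome reflects several
# opaque services composed together, and the aggregate result reveals
# nothing about which component decided what. All service logic is an
# illustrative assumption.

def fraud_check(order):
    return order["amount"] < 500

def inventory_check(order):
    return order["sku"] in {"A1", "B2"}

def payment_check(order):
    return order["card_valid"]

def purchase(order):
    # The buyer sees only True or False; the identity of the vetoing
    # service, and its internal logic, are both concealed.
    return all(check(order) for check in (fraud_check, inventory_check, payment_check))

order = {"amount": 120, "sku": "C3", "card_valid": True}
print(purchase(order))  # prints False; the failing layer is invisible to the buyer
```

Diagnosing the failure requires penetrating each service in turn, which is exactly the multiplication of black boxes the paragraph describes.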

Surveillance opacity creates asymmetric visibility where systems observe user behaviour in detail while concealing observation mechanisms, data collection practices, and inference processes from those observed (Lyon, 2014). Data collection systems capture user actions, communications, locations, and preferences through mechanisms users cannot see or control, creating comprehensive visibility into user behaviour without reciprocal transparency about what data is collected, how it is processed, or how inferences are used (Zuboff, 2015). Behavioural tracking, profiling algorithms, and predictive systems operate as one-way mirrors: users perform actions visible to systems while systems' observation and inference processes remain invisible to users (Crawford & Joler, 2018). The asymmetry enables knowledge extraction without informed consent—users cannot meaningfully evaluate privacy implications of systems whose data practices remain opaque—creating power imbalances where subjects lack awareness of how comprehensive observation informs targeting, manipulation, or control (Pasquale, 2015). Opacity shields surveillance mechanisms while surveillance itself eliminates opacity around observed subjects, producing structural visibility imbalance favouring system operators.

Abstraction creates usability by concealing complexity, but the concealment simultaneously produces opacity that eliminates causal traceability and process understanding. Black-box systems emerge through complexity exceeding cognitive limits, proprietary restrictions protecting competitive interests, or interface designs that compress multidimensional processes into simplified representations. Opacity shifts trust from comprehension to acceptance, forcing reliance on observed reliability rather than verified logic. Error concealment delays fault detection until outputs reveal problems, while validation challenges prevent verification of processes hidden within opaque transformation pathways. Accountability diffusion occurs when opacity prevents tracing outcomes to responsible actors, and dependency without understanding creates vulnerability where reliance deepens while comprehension does not. Interpretability techniques provide partial visibility but themselves operate as approximations that may not accurately reflect underlying mechanisms. Normalisation of opacity reduces expectations for understanding, while auditability mechanisms enable limited oversight through output testing that cannot detect all problems. Information asymmetry creates power imbalances, emergent behaviour produces fundamental unpredictability, and opacity amplification through system layering creates compound inscrutability. The result is not merely reduced visibility but structural transformation where systems operate beyond meaningful human oversight, generating outcomes through processes that remain systematically incomprehensible to those affected by their operation.

Supporting Case Studies

References

Abelson, H., & Sussman, G. J. (1996). Structure and interpretation of computer programs (2nd ed.). MIT Press.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Barabas, C., Dinakar, K., Ito, J., Virza, M., & Zittrain, J. (2018). Interventions over predictions: Reframing the ethical debate for actuarial risk assessment. In Proceedings of Machine Learning Research (Vol. 81, pp. 1–15).
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31
Blackler, A., Mahar, D., & Popovic, V. (2014). Intuitive interaction applied to interface design. International Journal of Human-Computer Studies, 72(3), 327–341. https://doi.org/10.1016/j.ijhcs.2013.10.002
Brogaard, J., Hendershott, T., & Riordan, R. (2014). High-frequency trading and price discovery. The Review of Financial Studies, 27(8), 2267–2306. https://doi.org/10.1093/rfs/hhu032
Brooks, F. P. (1987). No silver bullet: Essence and accidents of software engineering. Computer, 20(4), 10–19. https://doi.org/10.1109/MC.1987.1663532
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Caraban, A., Karapanos, E., Gonçalves, D., & Campos, P. (2019). 23 ways to nudge: A review of technology-mediated nudging in human-computer interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–15). ACM. https://doi.org/10.1145/3290605.3300733
Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538(7623), 20–23. https://doi.org/10.1038/538020a
Constantinides, P., & Barrett, M. (2015). Information infrastructure development and governance as collective action. Information Systems Research, 26(1), 40–56. https://doi.org/10.1287/isre.2014.0542
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). ACM. https://doi.org/10.1145/3097983.3098095
Crawford, K., & Joler, V. (2018). Anatomy of an AI system: The Amazon Echo as an anatomical map of human labor, data and planetary resources. AI Now Institute and Share Lab.
de Souza, C. R., Redmiles, D., & Dourish, P. (2005). "Breaking the code", moving between private and public work in collaborative software development. In Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work (pp. 105–114). ACM. https://doi.org/10.1145/1099203.1099219
Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
Dijkstra, E. W. (1968). Go to statement considered harmful. Communications of the ACM, 11(3), 147–148. https://doi.org/10.1145/362929.362947
Dourish, P. (2004). What we talk about when we talk about context. Personal and Ubiquitous Computing, 8(1), 19–30. https://doi.org/10.1007/s00779-003-0253-8
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a "right to an explanation" is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18–84.
Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. In Proceedings of Machine Learning Research (Vol. 81, pp. 1–12).
Ezrachi, A., & Stucke, M. E. (2016). Virtual competition: The promise and perils of the algorithm-driven economy. Harvard University Press.
Ferreira, D., Dey, A. K., & Kostakos, V. (2011). Understanding human-smartphone concerns: A study of battery life. In Pervasive Computing (pp. 19–33). Springer. https://doi.org/10.1007/978-3-642-21726-5_2
Few, S. (2006). Information dashboard design: The effective visual communication of data. O'Reilly Media.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
Hannak, A., Soeller, G., Lazer, D., Mislove, A., & Wilson, C. (2014). Measuring price discrimination and steering on e-commerce web sites. In Proceedings of the 2014 Conference on Internet Measurement Conference (pp. 305–318). ACM. https://doi.org/10.1145/2663716.2663744
Hanseth, O., & Lyytinen, K. (2010). Design theory for dynamic complexity in information infrastructures: The case of building internet. Journal of Information Technology, 25(1), 1–19. https://doi.org/10.1057/jit.2009.19
Introna, L. D. (2011). The enframing of code: Agency, originality and the plagiarist. Theory, Culture & Society, 28(6), 113–141. https://doi.org/10.1177/0263276411418131
Kallinikos, J., Aaltonen, A., & Marton, A. (2013). The ambivalent ontology of digital artifacts. MIS Quarterly, 37(2), 357–370. https://doi.org/10.25300/MISQ/2013/37.2.02
Kirilenko, A. A., & Lo, A. W. (2013). Moore's Law versus Murphy's Law: Algorithmic trading and its discontents. Journal of Economic Perspectives, 27(2), 51–72. https://doi.org/10.1257/jep.27.2.51
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
Kizilcec, R. F. (2016). How much information?: Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390–2395). ACM. https://doi.org/10.1145/2858036.2858402
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
Lehman, M. M. (1996). Laws of software evolution revisited. In Software Process Technology (pp. 108–124). Springer. https://doi.org/10.1007/BFb0017737
Leonardi, P. M. (2013). When does technology use enable network change in organizations? A comparative study of feature use and shared affordances. MIS Quarterly, 37(3), 749–775. https://doi.org/10.25300/MISQ/2013/37.3.04
Leveson, N. G. (2011). Engineering a safer world: Systems thinking applied to safety. MIT Press.
Lyon, D. (2014). Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data & Society, 1(2), 1–13. https://doi.org/10.1177/2053951714541861
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850. https://doi.org/10.1007/s10551-018-3921-3
Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
Monteiro, E., Pollock, N., Hanseth, O., & Williams, R. (2013). From artefacts to infrastructures. Computer Supported Cooperative Work, 22(4–6), 575–607. https://doi.org/10.1007/s10606-012-9167-1
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work and organization. The Academy of Management Annals, 2(1), 433–474. https://doi.org/10.1080/19416520802211644
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053–1058. https://doi.org/10.1145/361598.361623
Parnas, D. L. (1994). Software aging. In Proceedings of the 16th International Conference on Software Engineering (pp. 279–287). IEEE Computer Society Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM. https://doi.org/10.1145/2939672.2939778
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9. https://doi.org/10.3102/0013189X020003002
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22, 1–23.
Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87, 1085–1139.
Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59–68). ACM. https://doi.org/10.1145/3287560.3287598
Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Designing the user interface: Strategies for effective human-computer interaction (6th ed.). Pearson.
Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377–391. https://doi.org/10.1177/00027649921955326
Tanenbaum, A. S., & Wetherall, D. (2010). Computer networks (5th ed.). Prentice Hall.
Tilson, D., Lyytinen, K., & Sørensen, C. (2010). Research commentary—Digital infrastructures: The missing IS research agenda. Information Systems Research, 21(4), 748–759. https://doi.org/10.1287/isre.1100.0318
Vertesi, J. (2014). Seamful spaces: Heterogeneous infrastructures in interaction. Science, Technology, & Human Values, 39(2), 264–284. https://doi.org/10.1177/0162243913516012
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5