
Tools as Mediators of Action and Decision

Section 6: Technology & Tools — Chapter 1
[Figure: Tool-Mediated Action Architecture. Human intention passes through a tool interface whose affordances (actions made easy) and constraints (actions made difficult) yield tool-shaped possible outcomes; feedback shapes subsequent intentions, and automated decision points mark where agency flows from its source through the mediation layer to constrained output.]

Tools shape action not through direct physical force but through the architecture of possibility they construct. A tool mediates between intention and outcome by determining which actions become simple, which become complex, which require specialised knowledge, and which become invisible or inaccessible. This mediation operates structurally rather than episodically, establishing persistent patterns in how decisions are formed, actions are executed, and feedback is interpreted. The tool becomes an interface that translates human capability into a constrained action space, where the range of possible outcomes reflects tool design as much as user intent.

Mediation occurs when a tool interposes itself between an actor and an outcome, transforming the relationship between intention and result. Unlike simple extension—where a tool amplifies existing capability without fundamentally altering the action—mediation restructures the action space itself (Norman, 1988). The tool does not merely assist; it redefines what actions are available, efficient, or perceivable within the system (Gibson, 1977; Kaptelinin & Nardi, 2012). A calculator mediates arithmetic by eliminating manual computation steps, but it also eliminates visibility into computational process, creating dependence on the tool's internal logic. The mediation creates efficiency at the cost of transparency, a trade-off embedded in the tool's architecture rather than in any individual transaction.

Affordances represent the actions a tool makes perceptually and functionally accessible to users (McGrenere & Ho, 2000). An affordance does not simply exist; it emerges from the interaction between tool properties and user capabilities, rendering certain actions obvious while others remain obscure (Gaver, 1991). Interface elements such as buttons, sliders, and menus afford specific interaction patterns by making those patterns visually and functionally salient (Maier & Fadel, 2009). The affordance structures user perception, directing attention toward tool-supported actions while leaving unsupported actions cognitively distant. A system that affords one-click purchasing but requires multi-step verification for refunds shapes behaviour through differential accessibility, making acquisition easy and reversal difficult. The asymmetry is not accidental; it reflects deliberate affordance design that prioritises certain outcomes over others.

Constraints function as the structural inverse of affordances, determining what actions the tool prevents, complicates, or renders invisible (Lockton et al., 2010). Physical constraints limit actions through material properties—software that restricts file access, hardware that prevents unauthorised modifications—while cognitive constraints shape behaviour by increasing the mental effort required for disfavoured actions (Tromp et al., 2011). Procedural constraints embed restrictions within workflows, requiring authentication, approval, or multi-step verification before certain actions become possible (Friedman & Hendry, 2019). Each constraint type operates by increasing friction at specific decision points, raising the threshold for action execution without explicitly prohibiting behaviour. A system that allows instant message sending but requires confirmation dialogs for message deletion imposes asymmetric friction, shaping the likelihood that deletion occurs without formally preventing it.
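
A minimal sketch of this asymmetric friction, with invented names throughout: sending is a single frictionless call, while deletion of the same data is gated behind an explicit confirmation step that raises the threshold without prohibiting the action.

```python
# Hypothetical message store: the affordance (send) is one call, the
# constrained action (delete) absorbs a deliberate extra step.

class MessageStore:
    def __init__(self):
        self.messages = []

    def send(self, text: str) -> None:
        # Affordance: immediate, no ceremony.
        self.messages.append(text)

    def delete(self, index: int, confirm: bool = False) -> bool:
        # Procedural constraint: nothing is forbidden, but the first
        # attempt is absorbed as friction and must be repeated.
        if not confirm:
            return False
        self.messages.pop(index)
        return True

store = MessageStore()
store.send("hello")                  # succeeds at once
assert store.delete(0) is False      # friction: retry required
assert store.delete(0, confirm=True) # deliberate action succeeds
```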

Path dependency emerges when tool adoption creates structural commitments that constrain future choices (Arthur, 1989). Early decisions about tool selection establish technical standards, data formats, and skill investments that increase the cost of switching to alternative systems (Zhu & Iansiti, 2012). The path dependency operates through accumulated infrastructure—trained behaviours, integrated workflows, existing data repositories—that makes continuation easier than transition (Sydow et al., 2009). Network effects amplify path dependency when tool utility increases with user base size, creating self-reinforcing adoption patterns that entrench dominant systems regardless of technical superiority (Shapiro & Varian, 1998). An organisation that adopts a specific data management system invests in staff training, custom integrations, and proprietary data formats, each investment raising the barrier to migration. The tool selection becomes irreversible not through formal lock-in but through accumulated dependence that makes alternatives prohibitively costly.
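
The mechanism can be made concrete with a toy cost comparison (the numbers are illustrative, not empirical): continuation cost stays roughly flat while switching cost grows with each year of sunk investment in training, integrations, and data.

```python
# Toy model of accumulating switching costs. All figures are invented.

continuation_cost_per_year = 10.0   # e.g. licences plus maintenance
switch_base_cost = 15.0             # the migration project itself
investment_per_year = 8.0           # training, integrations, data formats

for years_in_use in range(1, 6):
    switch_cost = switch_base_cost + investment_per_year * years_in_use
    print(f"year {years_in_use}: continue={continuation_cost_per_year:.0f}, "
          f"switch={switch_cost:.0f}")
# The accumulated term widens the gap every year, so the barrier rises
# regardless of whether an alternative is technically superior.
```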

Delegation transfers decision-making authority from human actors to technical systems, substituting algorithmic judgement for human evaluation (Parasuraman & Riley, 1997). The delegation may occur through explicit automation—systems that execute predefined rules without human intervention—or through recommendation systems that constrain choice sets by presenting algorithmically filtered options (Lee & See, 2004). Delegated systems compress decision spaces by pre-selecting options, ranking alternatives, or executing default actions unless explicitly overridden (Zuboff, 2019). The compression creates efficiency by reducing cognitive load, but it also obscures the criteria governing selection, making the delegated process opaque to the user who receives only filtered results (Burrell, 2016). A content curation algorithm that surfaces certain articles while suppressing others delegates editorial judgement to technical logic, presenting users with a pre-filtered information environment without visibility into selection mechanisms.
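
The shape of such delegation can be sketched in a few lines; the scoring weights, field names, and cutoff below are hypothetical stand-ins for whatever criteria a real curation system encodes.

```python
# Hypothetical curation: an internal scoring rule pre-filters the option
# set, and the user receives results without reasons.

articles = [
    {"title": "A", "clicks": 900, "recency": 0.2},
    {"title": "B", "clicks": 120, "recency": 0.9},
    {"title": "C", "clicks": 400, "recency": 0.5},
]

def _score(article: dict) -> float:
    # The editorial judgement lives here, invisible at the interface:
    # engagement is weighted far above recency.
    return 0.8 * article["clicks"] / 1000 + 0.2 * article["recency"]

def curate(items: list[dict], k: int = 2) -> list[dict]:
    return sorted(items, key=_score, reverse=True)[:k]

for item in curate(articles):
    print(item["title"])   # a pre-filtered environment, criteria unseen
```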

Automation introduces persistent action execution that operates independently of continuous human oversight (Bainbridge, 1983). Automated systems monitor conditions, apply decision rules, and execute responses without requiring human confirmation at each step, creating continuous operation that persists until deliberately interrupted or reconfigured (Sheridan & Parasuraman, 2005). The automation shifts attention from action execution to exception handling, repositioning human actors as monitors who intervene only when automated processes encounter boundary conditions (Endsley, 2017). This creates structural vigilance demands: humans must maintain awareness of processes they do not actively control, detecting anomalies within system behaviour that may indicate malfunction or unintended outcomes (Hancock et al., 2013). Automated trading systems that execute thousands of transactions per second based on algorithmic rules require monitoring for aberrant behaviour, but the speed and volume of operations exceed human perceptual capacity, creating dependence on secondary monitoring tools that themselves introduce additional mediation layers.
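
The monitor-and-escalate structure reduces to a loop in which the rule executes unattended and only boundary conditions return control to a person; the readings, rule, and threshold below are invented for illustration.

```python
# Sketch of automation with the human repositioned as exception handler.
# Sensor values and thresholds are invented.

READINGS = [98, 101, 99, 250, 97]   # one value per tick
UPPER_BOUND = 200                   # boundary condition for escalation

def automated_response(value: int) -> str:
    return "adjust_down" if value > 100 else "hold"

for tick, value in enumerate(READINGS):
    if value > UPPER_BOUND:
        # Exception path: a human must now reconstruct context for a
        # process they were not actively controlling.
        print(f"tick {tick}: ESCALATE to human (value={value})")
    else:
        print(f"tick {tick}: {automated_response(value)} (value={value})")
```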

Feedback loops through tool-mediated systems shape behaviour by providing selective information about action outcomes (Froehlich et al., 2010). The tool determines what information is captured, how it is processed, and when it is presented, constructing a filtered view of system state that influences subsequent decisions (Caraban et al., 2019). Immediate feedback—such as real-time analytics dashboards—creates tight coupling between action and response, encouraging rapid iteration but also promoting reactive rather than reflective decision-making (Kluger & DeNisi, 1996). Delayed or aggregated feedback—such as monthly performance reports—decouples action from outcome, reducing perceived causality and weakening behavioural reinforcement (Larrick et al., 2016). The tool shapes not only what feedback is received but also its temporal structure, determining whether users perceive immediate consequences or only later, abstracted summaries of cumulative effects.
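
The temporal contrast is visible in one invented event stream rendered two ways: per-action deltas that couple each action to a response, versus a single aggregate that dissolves the action-outcome link.

```python
# One event stream, two feedback presentations. Usage figures are invented.

daily_usage = [5, 7, 3, 9, 6, 8, 4]   # e.g. kWh per day

# Immediate feedback: each day's action meets an instant, local response.
for day, kwh in enumerate(daily_usage, start=1):
    delta = kwh - daily_usage[day - 2] if day > 1 else 0
    print(f"day {day}: {kwh} kWh ({delta:+d} vs yesterday)")

# Aggregated feedback: the same behaviour compressed to one abstract
# number, long after any individual action could be connected to it.
print(f"weekly report: {sum(daily_usage)} kWh")
```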

Normalisation occurs when tool-mediated behaviour becomes standard practice, rendering the mediation itself invisible (Star, 1999). Repeated use of a tool embeds its logic into routine operations, transforming conscious adoption into unconscious habit (Leonardi, 2011). The normalisation extends beyond individual users to organisational and social levels, where tool-mediated processes become institutional expectations that shape role definitions, performance metrics, and coordination protocols (Orlikowski, 2000). A tool initially adopted to improve efficiency becomes a required component of standard operating procedures, embedding its affordances and constraints into formal requirements. The mediation transitions from optional enhancement to structural necessity, making alternative approaches not merely less efficient but institutionally non-compliant.

Dependence emerges when tool-mediated capabilities displace direct human competencies, creating structural reliance on continued tool access (Carr, 2008). The dependence operates through skill atrophy—the degradation of abilities that fall into disuse when delegated to tools—and through structural integration—the embedding of tool functions into processes that cannot operate without them (Sparrow et al., 2011). Navigation systems that provide turn-by-turn directions reduce reliance on spatial memory and map-reading skills, creating dependence on tool availability for wayfinding tasks that were previously performed through direct environmental engagement (Ishikawa et al., 2008). The dependence creates vulnerability: tool failure, inaccessibility, or intentional withdrawal disrupts processes that can no longer revert to pre-tool methods without significant capability rebuilding.

Compression of decision spaces occurs when tools reduce the range of options presented to users, filtering possibilities according to internal logic that may not align with user priorities (Swart, 2021). Recommendation systems compress vast option sets into curated subsets, presenting algorithmically selected alternatives while excluding others from consideration (Eslami et al., 2015). The compression shapes perception of available choices, creating the impression that presented options represent the full or best set when they reflect algorithmic prioritisation criteria that remain unspecified (Gillespie, 2014). A job search platform that surfaces certain listings based on keyword matching and employer bidding presents a compressed decision space where visibility depends on technical matching rather than comprehensive opportunity representation. Users make selections from the compressed set, unaware of excluded alternatives that failed to meet algorithmic thresholds.
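
A hypothetical visibility function shows how the compression works: listings below a combined threshold of keyword match and employer bid simply never render, and the user treats the remainder as the full set.

```python
# Invented compression rule: visibility blends relevance with commerce,
# and everything below the threshold stays unrendered.

listings = [
    {"id": 1, "match": 0.9, "bid": 0.1},
    {"id": 2, "match": 0.4, "bid": 0.9},
    {"id": 3, "match": 0.8, "bid": 0.0},
    {"id": 4, "match": 0.2, "bid": 0.1},
]

THRESHOLD = 0.5

def visibility(listing: dict) -> float:
    # Bids buy visibility alongside keyword relevance.
    return 0.5 * listing["match"] + 0.5 * listing["bid"]

shown = [l["id"] for l in listings if visibility(l) >= THRESHOLD]
hidden = [l["id"] for l in listings if visibility(l) < THRESHOLD]
print("presented:", shown)    # perceived as the available choices
print("never seen:", hidden)  # real options that failed the threshold
```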

Interface design determines how tool functions are accessed, organising capabilities into hierarchies that prioritise certain features over others (Johnson, 2014). Primary functions receive prominent placement—large buttons, menu priority, default settings—while secondary functions require navigation through sub-menus, settings panels, or advanced configurations (Shneiderman et al., 2016). The hierarchical organisation shapes usage patterns by making frequently accessed functions easy while relegating others to expert territory that casual users rarely explore (Blackler et al., 2014). A communication platform that places 'send message' prominently while burying 'delete account' under multiple navigation layers structures user behaviour through differential accessibility, making continuation easy and exit difficult without formally restricting either action.
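
Click depth gives a crude but concrete measure of this differential accessibility; the menu tree below is hypothetical.

```python
# Hierarchical placement as friction, measured in clicks to reach a
# function. The tree shape is invented.

MENU = {
    "send_message": [],                                   # top level
    "settings": {"account": {"privacy": [], "delete_account": []}},
}

def click_depth(tree, target, depth=1):
    for name, sub in (tree.items() if isinstance(tree, dict) else []):
        if name == target:
            return depth
        found = click_depth(sub, target, depth + 1)
        if found:
            return found
    return None

print(click_depth(MENU, "send_message"))    # 1 click: continuation is easy
print(click_depth(MENU, "delete_account"))  # 3 clicks: exit is buried
```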

Default configurations establish baseline settings that operate unless explicitly changed, shifting the decision burden from opt-in to opt-out (Johnson & Goldstein, 2003). Defaults leverage status quo bias—the tendency to accept pre-set conditions rather than expend effort to modify them—creating persistent configurations that reflect designer preferences rather than active user choices (Samuelson & Zeckhauser, 1988). The default becomes the effective choice for most users, even when alternatives are technically available, because modification requires awareness, effort, and knowledge of configuration options (Böhme & Köpsell, 2010). Privacy settings that default to maximum sharing, notification systems that default to all alerts enabled, and feature subscriptions that default to auto-renewal create usage patterns where most users operate under designer-selected configurations rather than personalised preferences.
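
In code the pattern is nothing more than designer-chosen default values that hold unless actively overridden; the field values here are invented designer preferences, not recommendations.

```python
# Defaults as the effective choice for the opt-out majority.

from dataclasses import dataclass

@dataclass
class AccountSettings:
    share_activity: bool = True     # defaults favour maximum sharing,
    all_notifications: bool = True  # maximum engagement,
    auto_renew: bool = True         # and continued subscription

# Most users instantiate with no arguments, so the designer's baseline
# becomes the population's de facto configuration.
typical_user = AccountSettings()
deliberate_user = AccountSettings(share_activity=False, auto_renew=False)
print(typical_user)
print(deliberate_user)
```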

Tool-mediated perception alters how users interpret environmental conditions by filtering, aggregating, and presenting information according to tool logic (Kitchin & Dodge, 2011). Analytical dashboards compress complex datasets into visual summaries, selecting metrics, time periods, and comparison frameworks that shape interpretation of underlying phenomena (Few, 2006). The tool determines what becomes visible and how it is contextualised, constructing a mediated reality that may diverge significantly from raw data patterns (Boyd & Crawford, 2012). Users perceive the processed view as objective representation, unaware that metric selection, aggregation methods, and visualisation choices embed interpretive assumptions that privilege certain patterns while obscuring others. A performance dashboard that highlights individual productivity metrics while omitting collaborative contributions constructs a particular understanding of organisational effectiveness, directing attention toward measurable individual outputs rather than distributed collective processes.
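
One invented dataset rendered through two metric selections makes the point: each dashboard produces a different ranking of the same people, and neither view is the raw phenomenon.

```python
# Same records, two dashboards. Field names and numbers are invented.

records = [
    {"person": "A", "tickets_closed": 30, "reviews_given": 2},
    {"person": "B", "tickets_closed": 12, "reviews_given": 25},
]

# Dashboard 1 surfaces individual throughput only.
for r in sorted(records, key=lambda r: r["tickets_closed"], reverse=True):
    print(f"{r['person']}: {r['tickets_closed']} tickets closed")

# Dashboard 2 surfaces collaborative contribution instead.
for r in sorted(records, key=lambda r: r["reviews_given"], reverse=True):
    print(f"{r['person']}: {r['reviews_given']} reviews given")
# A leads one view, B the other: metric selection constructs "performance".
```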

Abstraction layers within technical systems conceal operational complexity, presenting simplified interfaces that hide underlying mechanisms (Floridi, 2011). The abstraction makes tools accessible to non-expert users by eliminating the need to understand implementation details, but it also creates opacity regarding how inputs transform into outputs (Dourish, 2004). Users interact with high-level functions—commands like 'calculate total,' 'send message,' 'optimise route'—without visibility into computational processes, data flows, or decision logic that execute behind the interface (Ananny & Crawford, 2018). The opacity becomes problematic when systems produce unexpected outcomes, errors, or biased results, because users lack access to diagnostic information that would enable understanding or correction of tool behaviour. A loan approval system that outputs accept/reject decisions without revealing scoring algorithms or decision thresholds creates accountability gaps where neither applicants nor human overseers can evaluate whether outcomes reflect appropriate criteria.
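
A deliberately opaque sketch of such an interface: the callable exposes only a verdict, while the score and threshold that produced it remain internal. All values are invented.

```python
# Abstraction as opacity: the call site sees accept/reject and nothing else.

def approve_loan(income: float, debt: float) -> str:
    # Everything below this line is invisible to the caller.
    score = income / 1000 - 2.0 * (debt / 1000)
    threshold = 40.0
    return "accept" if score >= threshold else "reject"

print(approve_loan(income=90_000, debt=20_000))
# "accept", with no diagnostic path from the verdict back to the score,
# the threshold, or the weighting that produced it.
```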

Intermediate representations within tools—such as file formats, data schemas, and protocol standards—determine how information is stored, transmitted, and transformed across system boundaries (Edwards et al., 2013). These representations embed assumptions about data structure, permissible values, and relationship types that constrain what information can be captured and how it can be processed (Bowker & Star, 1999). A data entry form that requires classification into predefined categories forces fit between complex reality and simplified taxonomy, losing nuance at the point of capture (Schlesinger et al., 2018). The representation becomes the working reality for downstream processes that operate only on captured data, perpetuating simplifications and exclusions embedded in the original tool design. Information that cannot be expressed within the tool's representational framework becomes invisible to processes that rely on tool-mediated data.
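
A minimal capture sketch with an invented category list: anything the schema cannot express is coerced to a catch-all at the point of entry and is lost to every downstream consumer.

```python
# Representational loss at capture time. The taxonomy is invented.

ALLOWED = {"bug", "feature", "question"}

def capture(raw_label: str) -> str:
    label = raw_label.strip().lower()
    # Outside the schema, nuance collapses into a catch-all; downstream
    # processes read only what the representation could hold.
    return label if label in ALLOWED else "other"

print(capture("Bug"))                          # fits the schema
print(capture("usability regression on iOS"))  # flattened to "other"
```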

Coordination between tools creates interdependencies where malfunction or incompatibility in one system cascades through connected processes (Perrow, 1984). Integrated systems that share data, trigger processes, or synchronise operations across platforms introduce complexity where component failures propagate unpredictably (Orlikowski & Scott, 2008). The interdependence creates brittleness: a system that functions reliably in isolation may fail when connected to others, producing emergent behaviours that do not exist within individual components (Kallinikos et al., 2013). An e-commerce platform that integrates inventory management, payment processing, shipping logistics, and customer communication relies on continuous coordination across systems where failure in any component disrupts the entire transaction flow, creating operational fragility masked by normal-operation reliability.
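
The multiplicative fragility is straightforward to demonstrate: with four coupled components that each succeed 99% of the time independently (invented odds), the end-to-end flow succeeds only about 96% of the time, since 0.99^4 ≈ 0.961.

```python
# Chained reliability: the transaction succeeds only if every coupled
# component succeeds. Step names and failure odds are invented.

import random

STEPS = ["inventory", "payment", "shipping", "notification"]

def run_transaction(p_step_ok: float = 0.99) -> bool:
    # A single component failure propagates: the whole flow aborts.
    return all(random.random() <= p_step_ok for _ in STEPS)

random.seed(0)
trials = 10_000
successes = sum(run_transaction() for _ in range(trials))
print(f"end-to-end success rate: {successes / trials:.3f}")  # ~0.96
```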

Tool selection criteria often prioritise immediate functionality over long-term structural implications, creating adoption patterns where convenience dominates considerations of dependence, control, or reversibility (Hanseth & Lyytinen, 2010). The temporal mismatch between evident benefits and hidden costs produces decisions that lock in structural commitments whose full implications emerge only after widespread adoption (Tilson et al., 2010). A communication platform adopted for its ease of use gradually accumulates organisational dependence as conversation archives, workflow integrations, and social networks embed within the tool, creating exit barriers that were not apparent during initial adoption. The tool transitions from optional convenience to structural necessity without explicit decision-making about the shift, embedding mediation through incremental normalisation rather than deliberate commitment.

Skill displacement occurs when tool-mediated processes eliminate the need for capabilities that were previously required, creating generational gaps where newer practitioners never develop competencies that older generations consider foundational (Carr, 2014). Automated spell-checking reduces reliance on orthographic knowledge, GPS navigation displaces map-reading and spatial reasoning, and algorithmic design assistants reduce manual layout skills (Ward, 2013). The displacement creates vulnerability when tools fail or prove inadequate for novel situations, because the displaced skills are no longer available within the user population (Henrich, 2015). A workforce trained exclusively on automated systems lacks fallback capabilities when systems malfunction, creating dependence where continued operation requires tool functionality that cannot be substituted with direct human performance.

Learning through tools differs from learning about tools, creating knowledge that is procedurally bound to specific systems rather than conceptually portable across contexts (Salomon et al., 1991). Users develop expertise in navigating particular interfaces—knowing which buttons to press, which menus to access—without necessarily understanding underlying principles that would enable transfer to alternative systems (Koedinger et al., 2012). The tool-specific knowledge creates fragility where system changes—interface redesigns, feature deprecations, platform migrations—invalidate accumulated expertise, forcing relearning cycles that would not occur if knowledge were grounded in conceptual understanding rather than procedural familiarity (Sweller, 2010). Training programmes that teach 'how to use the software' without addressing 'what the software is doing' produce users who can operate current systems but cannot adapt when tools change or fail.

Visibility of tool mediation varies inversely with normalisation: systems that are deeply embedded in routine practice become invisible as mediators, perceived as transparent extensions rather than active filters (Verbeek, 2006). The invisibility creates conditions where users attribute outcomes to their own actions or to external circumstances rather than recognising tool influence on decision processes and available options (Introna, 2011). A search engine that shapes information access through ranking algorithms becomes invisible infrastructure, with users attributing search results to relevance rather than recognising algorithmic mediation that constructs relevance according to proprietary criteria. The invisibility shields the tool from scrutiny, allowing mediation to operate as background infrastructure rather than as an active filter subject to evaluation or contestation.

Reversibility of tool adoption depends on whether mediation creates structural changes that persist after tool removal (Monteiro et al., 2013). Some tools serve as temporary assistants that leave no lasting trace—calculators that perform arithmetic without altering mathematical understanding—while others create lasting dependence through data formats, skill displacement, or institutional integration that cannot be easily unwound (Constantinides & Barrett, 2015). Irreversibility emerges when tool-mediated processes generate outputs—stored data, established workflows, trained behaviours—that cannot revert to pre-tool states without substantial loss (Henfridsson et al., 2014). A document creation platform that stores files in proprietary formats creates exit barriers where discontinuation requires data conversion, potential information loss, and compatibility issues with alternative tools, making continued use easier than migration despite dissatisfaction with the original tool.

Agency distribution shifts when tools automate decision-making, creating ambiguity about whether outcomes reflect human intention or system logic (Leonardi, 2013). Responsibility becomes diffuse when actions result from human-tool interaction where neither party alone produced the outcome, complicating accountability frameworks designed for clear human or technical causation (Martin, 2019). An algorithmic hiring system that scores candidates based on resume analysis distributes agency between human recruiters who configure the system and algorithmic processes that execute scoring, creating uncertainty about whether outcomes reflect recruiter judgement or emergent system behaviour. The distribution enables deflection—humans blame the algorithm, designers cite proper functionality—without clear mechanisms for determining causality or assigning responsibility for problematic outcomes.
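
A sketch of the split, with invented weights, features, and cutoff: the recruiter chooses the parameters, the system executes the arithmetic, and the verdict belongs cleanly to neither party.

```python
# Distributed agency: human-configured parameters, machine-executed scoring.

RECRUITER_WEIGHTS = {"years_experience": 0.6, "keyword_hits": 0.4}  # human choice
CUTOFF = 5.0                                                        # human choice

def score(candidate: dict) -> float:
    # Mechanical execution, meaningful only under the weights above.
    return sum(RECRUITER_WEIGHTS[k] * candidate[k] for k in RECRUITER_WEIGHTS)

candidate = {"years_experience": 6, "keyword_hits": 3}
verdict = "advance" if score(candidate) >= CUTOFF else "reject"
print(verdict)  # 0.6*6 + 0.4*3 = 4.8 -> "reject": whose decision was it?
```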

Tools mediate by constructing the interface between intention and outcome, determining which actions become accessible, efficient, visible, or possible within a given context. This mediation operates through affordances that direct behaviour toward certain paths, constraints that restrict alternative routes, and defaults that establish baseline configurations shaping most users' experiences. Delegation transfers judgement to technical systems, automation enables persistent execution without oversight, and feedback loops shape learning through selective information presentation. Path dependency locks in early choices through accumulated infrastructure, while normalisation renders mediation invisible as tools become standard practice. Dependence emerges when displaced skills and integrated workflows create structural reliance on continued tool access, and agency distribution complicates accountability as outcomes reflect neither pure human intention nor pure algorithmic execution. The tool shapes not only how tasks are performed but also what tasks become thinkable, achievable, and routine within the system it constructs.

References

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical events. The Economic Journal, 99(394), 116–131. https://doi.org/10.2307/2234208
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779. https://doi.org/10.1016/0005-1098(83)90046-8
Blackler, A., Mahar, D., & Popovic, V. (2014). Intuitive interaction applied to interface design. International Journal of Human-Computer Studies, 72(3), 327–341. https://doi.org/10.1016/j.ijhcs.2013.10.002
Böhme, R., & Köpsell, S. (2010). Trained to accept? A field experiment on consent dialogs. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2403–2406). ACM. https://doi.org/10.1145/1753326.1753689
Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press.
Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Caraban, A., Karapanos, E., Gonçalves, D., & Campos, P. (2019). 23 ways to nudge: A review of technology-mediated nudging in human-computer interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–15). ACM. https://doi.org/10.1145/3290605.3300733
Carr, N. (2008). Is Google making us stupid? The Atlantic, 302(1), 56–63.
Carr, N. (2014). The glass cage: Automation and us. W. W. Norton & Company.
Constantinides, P., & Barrett, M. (2015). Information infrastructure development and governance as collective action. Information Systems Research, 26(1), 40–56. https://doi.org/10.1287/isre.2014.0542
Dourish, P. (2004). What we talk about when we talk about context. Personal and Ubiquitous Computing, 8(1), 19–30. https://doi.org/10.1007/s00779-003-0253-8
Edwards, P. N., Jackson, S. J., Bowker, G. C., & Knobel, C. P. (2013). Understanding infrastructure: Dynamics, tensions, and design. Deep Blue, University of Michigan. https://doi.org/10.3998/3336451.0001.001
Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). "I always assumed that I wasn't really that close to [her]": Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153–162). ACM. https://doi.org/10.1145/2702123.2702556
Few, S. (2006). Information dashboard design: The effective visual communication of data. O'Reilly Media.
Floridi, L. (2011). The informational nature of personal identity. Minds and Machines, 21(4), 549–566. https://doi.org/10.1007/s11023-011-9259-6
Friedman, B., & Hendry, D. G. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.
Froehlich, J., Findlater, L., & Landay, J. (2010). The design of eco-feedback technology. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1999–2008). ACM. https://doi.org/10.1145/1753326.1753629
Gaver, W. W. (1991). Technology affordances. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 79–84). ACM. https://doi.org/10.1145/108844.108856
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing (pp. 67–82). Lawrence Erlbaum.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2013). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 55(3), 517–527. https://doi.org/10.1177/0018720812465082
Hanseth, O., & Lyytinen, K. (2010). Design theory for dynamic complexity in information infrastructures: The case of building internet. Journal of Information Technology, 25(1), 1–19. https://doi.org/10.1057/jit.2009.19
Henfridsson, O., Mathiassen, L., & Svahn, F. (2014). Managing technological change in the digital age: The role of architectural frames. Journal of Information Technology, 29(1), 27–43. https://doi.org/10.1057/jit.2013.30
Henrich, J. (2015). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.
Introna, L. D. (2011). The enframing of code: Agency, originality and the plagiarist. Theory, Culture & Society, 28(6), 113–141. https://doi.org/10.1177/0263276411418131
Ishikawa, T., Fujiwara, H., Imai, O., & Okabe, A. (2008). Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. Journal of Environmental Psychology, 28(1), 74–82. https://doi.org/10.1016/j.jenvp.2007.09.002
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721
Johnson, J. (2014). Designing with the mind in mind: Simple guide to understanding user interface design guidelines (2nd ed.). Morgan Kaufmann.
Kallinikos, J., Aaltonen, A., & Marton, A. (2013). The ambivalent ontology of digital artifacts. MIS Quarterly, 37(2), 357–370. https://doi.org/10.25300/MISQ/2013/37.2.02
Kaptelinin, V., & Nardi, B. (2012). Activity theory in HCI: Fundamentals and reflections. Synthesis Lectures on Human-Centered Informatics, 5(1), 1–105. https://doi.org/10.2200/S00413ED1V01Y201203HCI013
Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life. MIT Press.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. https://doi.org/10.1037/0033-2909.119.2.254
Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The Knowledge-Learning-Instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798. https://doi.org/10.1111/j.1551-6709.2012.01245.x
Larrick, R. P., Soll, J. B., & Keeney, R. L. (2016). Designing better energy metrics for consumers. Behavioral Science & Policy, 1(1), 63–75. https://doi.org/10.1353/bsp.2015.0003
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
Leonardi, P. M. (2011). When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies. MIS Quarterly, 35(1), 147–167. https://doi.org/10.2307/23043493
Leonardi, P. M. (2013). When does technology use enable network change in organizations? A comparative study of feature use and shared affordances. MIS Quarterly, 37(3), 749–775. https://doi.org/10.25300/MISQ/2013/37.3.04
Lockton, D., Harrison, D., & Stanton, N. A. (2010). The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics, 41(3), 382–392. https://doi.org/10.1016/j.apergo.2009.09.001
Maier, J. R., & Fadel, G. M. (2009). Affordance-based design methods for innovative design, redesign and reverse engineering. Research in Engineering Design, 20(4), 225–239. https://doi.org/10.1007/s00163-009-0064-7
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850. https://doi.org/10.1007/s10551-018-3921-3
McGrenere, J., & Ho, W. (2000). Affordances: Clarifying and evolving a concept. In Proceedings of Graphics Interface 2000 (pp. 179–186). Canadian Information Processing Society.
Monteiro, E., Pollock, N., Hanseth, O., & Williams, R. (2013). From artefacts to infrastructures. Computer Supported Cooperative Work, 22(4–6), 575–607. https://doi.org/10.1007/s10606-012-9167-1
Norman, D. A. (1988). The psychology of everyday things. Basic Books.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428. https://doi.org/10.1287/orsc.11.4.404.14600
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work and organization. The Academy of Management Annals, 2(1), 433–474. https://doi.org/10.1080/19416520802211644
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books.
Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9. https://doi.org/10.3102/0013189X020003002
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7–59. https://doi.org/10.1007/BF00055564
Schlesinger, A., O'Hara, K. P., & Taylor, A. S. (2018). Let's talk about race: Identity, chatbots, and AI. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM. https://doi.org/10.1145/3173574.3173889
Shapiro, C., & Varian, H. R. (1998). Information rules: A strategic guide to the network economy. Harvard Business Press.
Sheridan, T. B., & Parasuraman, R. (2005). Human-automation interaction. In R. S. Nickerson (Ed.), Reviews of human factors and ergonomics (Vol. 1, pp. 89–129). Human Factors and Ergonomics Society. https://doi.org/10.1518/155723405783703082
Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Designing the user interface: Strategies for effective human-computer interaction (6th ed.). Pearson.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377–391. https://doi.org/10.1177/00027649921955326
Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2), 1–11. https://doi.org/10.1177/20563051211008828
Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22(2), 123–138. https://doi.org/10.1007/s10648-010-9128-5
Sydow, J., Schreyögg, G., & Koch, J. (2009). Organizational path dependence: Opening the black box. Academy of Management Review, 34(4), 689–709. https://doi.org/10.5465/amr.2009.44885978
Tilson, D., Lyytinen, K., & Sørensen, C. (2010). Research commentary—Digital infrastructures: The missing IS research agenda. Information Systems Research, 21(4), 748–759. https://doi.org/10.1287/isre.1100.0318
Tromp, N., Hekkert, P., & Verbeek, P.-P. (2011). Design for socially responsible behavior: A classification of influence based on intended user experience. Design Issues, 27(3), 3–19. https://doi.org/10.1162/DESI_a_00087
Verbeek, P.-P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361–380. https://doi.org/10.1177/0162243905285847
Ward, A. F. (2013). Supernormal: How the Internet is changing our memories and our minds. Psychological Inquiry, 24(4), 341–348. https://doi.org/10.1080/1047840X.2013.850148
Zhu, F., & Iansiti, M. (2012). Entry into platform-based markets. Strategic Management Journal, 33(1), 88–106. https://doi.org/10.1002/smj.941
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.