On False Augmented Agency and What Surveillance Capitalism and User-Centered Design Have to Do With It

Rodrigo Hernández-Ramírez
IADE, Universidade Europeia, Lisbon, Portugal
UNIDCOM/IADE, Lisbon, Portugal
rodrigo.ramirez@universidadeeuropeia.pt

ABSTRACT

Recently, there has been a surge in AI-powered products. Often marketed as “free”, these services operate as hooks to lure unsuspecting users into voluntarily giving up data about every aspect of their lives. This data is the primary fuel of surveillance capitalism, a new economic system that exclusively benefits so-called Big Tech organisations at the expense of personal privacy and freedom of choice. This paper argues that the way these AI-powered products are being imagined and designed is further generalising a kind of “enframing” that encourages a bureaucratic relationship with the world disguised as (a false sense of) augmented agency. It shows that technologically informed philosophical reflection can contribute to getting ourselves back into the feedback loop of technological mediation by helping us recognise our “becoming” with technologies as a design process.

KEYWORDS

Augmented Agency; Bureaucratic Technologies; Philosophy of Technology; Self-Transformation; Surveillance Capitalism; User-Centered Design.

1 | INTRODUCTION

In recent years, thanks to developments in AI methods, computing hardware, and network connectivity, there has been a surge in “smart” devices claiming to improve people's lives. Thanks to ubiquitous computing, multitudes of objects in our built environment can double as interactive, interconnected nodes of the Internet and as sensors that gather data about every aspect of human life. The growing stream of information generated by the Internet of Things (IoT) is allowing organisations employing new machine learning methods and large-scale data gathering to make detailed, albeit questionably accurate, forecasts about our behaviour (McNeil, 2018; O'Neil, 2016; Thompson, 2019; van Dijck, 2014; Varghese, 2019).

The rationale behind the development of AI-powered devices and services often invokes User-Centered Design (UCD) principles, as well as Weiser's (1991/1999) optimistic vision of ubiquitous computing. The reasoning is, first, that by anticipating people's needs and providing relevant contextual information and suggestions, technologies will become not only more usable but also more useful and meaningful; secondly, that by offering this level of tailoring while blending unobtrusively into the background and remaining ready-to-hand, smart devices and services will further improve people's overall experience of daily life. However, recent events such as the Cambridge Analytica scandal (Cadwalladr & Graham-Harrison, 2018) and the role of YouTube in the growth of far-right politics (Fisher & Taub, 2019) have raised serious questions about our relationship with smart technologies at large and with so-called Big Tech in particular.

Automating tasks with the help of artificial devices is arguably a defining human feature (Martinho-Truswell, 2018). While humans are not the only creatures that employ tools, no other animal's development, behaviour, and well-being are so strongly shaped by technology as ours (Ihde & Malafouris, 2018). So-called Big Tech organisations providing data-based, AI-powered products are surreptitiously exploiting, with impunity, our technological susceptibilities.
Their business models have given rise to “surveillance capitalism”. This economic system follows a radically new logic of accumulation, fuelled by data analytics, that is curtailing our privacy, our freedom of choice (Naughton, 2019; Zuboff, 2015, 2019), and even our capacity to self-transform. Through dishonest forms of automation that “hypernudge” (Yeung, 2016) and effectively lock people into behaviours and processes “for which they have no legitimate need or desire” (Girardin, 2019), the new “Data Barons” (Mayer-Schönberger & Cukier, 2013) alienate people from their information and decision-making. [1] Moreover, these levels of control show the degree to which the “corporate bureaucratic culture” (Graeber, 2012) has taken over society, deliberately putting otherwise “poetic technologies” [2] entirely at the service of “total bureaucratisation”.

This paper argues that the way AI-powered IoT devices are being designed is not only consolidating surveillance capitalism but also promoting a limiting technological “enframing” (see Zwier, Blok, & Lemmens, 2016) disguised as augmented agency. It shows that, despite claiming to follow UCD principles, smart devices and services offering hyper-personalisation are generalising plutocratic, unimaginative interactions with the world. This paper admits there are no immediate solutions to “fix” the above issues, but claims that a good starting point is making sure that, when it comes to augmenting agency, human freedom of choice is always privileged. It contends that keeping humans on the (decision) feedback loop means calling into question the idea that AI-powered devices need to operate seamlessly in the background of human experience, for this often implies trading human control for technical convenience. Furthermore, it argues that we need to rethink what we understand by the user's real needs and requirements, and what UCD means in the age of smart automation.

2 | THE PERILS OF SURVEILLANCE CAPITALISM

In our “hyperhistorical”, information-dependent societies, [3] the built environment is increasingly populated by billions of sensors embedded within IoT devices. Human activities have become overwhelmingly mediated by computational technology. This is a radical shift because, unlike previous forms of automation, [4] computers “informate” processes; i.e., they generate data about when they are used, for which purpose, and by whom, thus making the most minute details about their usage knowable (Zuboff, 2015). As a result, potentially every human activity can be “datafied”, [5] i.e., “rendered in a new symbolic dimension as events, objects, processes”, thus making people and their behaviour “visible, knowable, and shareable” (2015, p. 78). This availability of data (rather than breakthroughs in algorithm design per se) is what allows contemporary AI methods to achieve the unprecedented levels of efficiency and accuracy they exhibit today.
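To make “informating” concrete, consider a minimal sketch, in Python, of the kind of event record a hypothetical smart thermostat might emit. All field names and values here are invented for illustration; they mirror no vendor's actual telemetry schema.

```python
import json
import time
import uuid

def informate(action: str, user_id: str, context: dict) -> str:
    """Wrap an ordinary interaction in a 'datafied' event record: the device
    does not merely perform the action, it also renders who did what, when,
    and under which circumstances as storable, shareable data."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),   # when the device was used
        "user_id": user_id,         # by whom
        "action": action,           # for which purpose
        "context": context,         # and under which circumstances
    }
    return json.dumps(event)

# A mundane gesture becomes a behavioural data point, ready to be
# collected, aggregated, analysed, and brokered:
record = informate(
    action="set_temperature",
    user_id="household-42",
    context={"room": "bedroom", "target_celsius": 19, "occupants": 2},
)
print(record)
```

Each such record is trivial on its own; aggregated over months and millions of households, these streams become the raw material for the behavioural forecasts discussed below.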
The vast amounts of data generated directly or indirectly by smart devices are collected, stored, abstracted, aggregated, and analysed [6] by Big Tech and by every emerging organisation wanting to take part in the contemporary economy. Usually, these organisations contend that it is only by knowing their users intimately (i.e., by gaining access to their everyday data) that the products and services they offer can reduce “frictions”; i.e., become more efficient, more usable, more comfortable to live with, and more desirable.

This argument echoes the general principles of UCD, which is arguably the dominant approach in product development, Interaction Design, User Experience Design, and other fields within the tech industry. UCD, a cluster of processes with its origins in early Human–Computer Interaction (HCI) research, privileges a humanistic outlook, arguing that users' needs ought to be put above the system's functional requirements when designing (Baek, Cagiltay, Boling, & Frick, 2007; Norman & Draper, 1986; Wallach & Scholz, 2012). Nowadays, knowing users is no longer reduced to carrying out market research in the traditional sense (Floridi, 2019; Ruckenstein & Granroth, 2019). It therefore makes sense for organisations to engage in large-scale data collection; whether this information yields actual knowledge about the user is beside the point. The problem, however, is that the user's needs are rarely what organisations actually have in mind, particularly when it comes to AI-powered devices and services.

Automation has, at least since the Industrial Revolution, allowed organisations to reduce costs and increase revenues; adopting AI-driven automation is, therefore, a reasonable, logical step. AI is data-hungry; it needs vast amounts of input to operate, adapt, and grow, which partly explains why organisations are so eagerly adopting Big Data schemes and practices. [7] However, there is another, more disturbing reason why most organisations now deliberately design their products and services to extract as much data as possible from their customers: users themselves have become the resource and the product, as well as the target. As Foster & McChesney (2014) and Zuboff (2015) argue, Big Data embodies the new logic of appropriation and accumulation underpinning a novel economic system which they have named “surveillance capitalism”.

Surveillance capitalism emerged from the auction-style advertising model Google pioneered in the early 2000s, wherein companies pay to have their ads tailored based on users' data and behaviour patterns. Since then, this model has been adopted and refined by the FAANGS [8] (Facebook, Apple, Amazon, Netflix, and Spotify) and by almost every other major technology company. Under surveillance capitalism, organisations appropriate people's “data exhaust”—that is, the data people shed as a by-product of their actions and movements in the world (Mayer-Schönberger & Cukier, 2013)—to forecast and modify their behaviour in order to increase revenue and obtain further market control (Zuboff, 2015, pp. 75–80).

Under surveillance capitalism, people are offered access to convenient services they presumably want (e.g., participating in a social network) in exchange for accepting an unrelenting invasion of their privacy. The problem, as Zuboff (2015) notes, is that privacy implies deciding whether one wants to keep something secret or not. By “hypernudging” (see Yeung, 2016) users into surrendering their ability to keep their information, their beliefs, and their wishes to themselves, the new Data Barons are limiting people's capacity to choose and, therefore, are curtailing their fundamental rights. Along with being illegitimate, this kind of pact is Faustian, because users are often unaware of the scope, degree, and frequency of the surveillance they will be subjected to. Their consent thus often resembles that of the compulsive gambling addict, as Yeung (2016, pp. 131–132) notes. Under surveillance capitalism, there are no contractual reciprocities.
Data extraction is automated and unidirectional, a process that leaves no space for negotiation or for the kind of lawful relationship based on social trust. Users' behaviours when interacting with a given service may be rewarded or punished based on opaque automated decisions—e.g., users may be expelled from the service (Kurtis, 2019). This “formal indifference” towards the people who are both the sources of data and the targets of Big Data analytics is the hallmark that distinguishes surveillance capitalism from previous economic systems (Zuboff, 2015, pp. 76–80). The “free” services provided by Data Barons are not objects of value that are exchanged in a transaction but rather serve as “hooks” that lure unsuspecting users into an asymmetrical and indifferent relationship with a technology (Yeung, 2016; Zuboff, 2015). Voluntary submission to constant surveillance not only curtails our freedom of choice but erodes our capacity to self-transform, because technologies in general, and these in particular, have tremendous influence over how we perceive the world, as we will see in the following section.

2.1 | BECOMING THROUGH TECHNOLOGY

Traditionally, humanistic analyses concerning technological agency, including Critical Theory (in the “narrow” and “wide” senses; see Bohman, 2016) and the early philosophy of technology (e.g., Heidegger, 1954/1977b; Mumford, 1967), regarded technologies mostly in pessimistic terms. They usually establish a sharp division between human nature and technics, portraying technology as a tyrannical force that “enframes” [9] our mindset and threatens to overwhelm human agency. In the last decades, however, there have been two important shifts in the way philosophers of technology think about “being human”. First, they now recognise the artificial dimensions of human nature; they have realised that human beings actually “become constituted through making and using technologies”, because tools shape our minds and augment our capacities (Ihde & Malafouris, 2018). Secondly, there are more attempts to rethink the place of humans in the world, to “re-place” human agency (Galanos, 2017), and to develop frameworks that account for the agency of non-human agents in our environment. Most contemporary posthumanist currents endorse—with varying degrees of strength—both stances, including Actor-Network Theory (ANT), postphenomenology, speculative realism, new materialism, and informational structural realism, to name a few (see for example Bogost, 2012; Cudworth & Hobden, 2014; Floridi, 2002; Rosenberger, 2014).

Crafting and using tools is not exclusive to humans. The fact that other animals, such as great apes, birds, or cetaceans, develop and share technologies—and also enjoy doing so—is well documented (Garber, 2014; Jacobs, Bayern, & Osvath, 2016; McCoy et al., 2019). What distinguishes us from non-human animals in terms of technology is not merely the degree to which we have incorporated it into our lives (which is unparalleled). We use technologies to enhance our capacities, but we do it largely by delegating tasks to autonomous systems (Martinho-Truswell, 2018). [10] In so doing, we become dependent on, and hence intrinsically linked with, the myriads of devices populating our built environment. Unlike non-human animals, our Lebenswelt or lifeworld is defined by a constantly evolving relationship with artificial objects. [11]
It follows, as Ihde & Malafouris (2018) suggest, that even though human evolution has usually been characterised in terms of adaptation, it would be more appropriate to describe it in dialectical terms, as a technically mediated and often intentional “becoming”. Unlike non-human animals, we fabricate our tools, and therefore we also fabricate our circumstances. To paraphrase Ortega y Gasset (1939/1964), a defining feature of being human is our constant struggle to “make our existence”, to bring about what is yet to be. As a species, humans are outstanding makers. Our technical capacities allow us not only to design and manipulate things in our environment but also to determine when and how we change; how we self-consciously “become” (Ihde, 2009). This “self-fabrication” implies that we continuously find ourselves “first and foremost, in the situation of the technician” (Ortega y Gasset, 1939/1964, p. 341).

Technically mediated human becoming has arguably been going on for a long time, perhaps as far back as the time of Acheulean axes (roughly 1.7 million years ago). However, in the last decades, we have made our world friendlier towards devices that have comparatively more agency and autonomy and, therefore, a stronger influence over human actions and decisions. These artefacts are often “inflexible, stubborn, intolerant of mistakes, and unlikely to change”, whereas humans tend to be exactly the opposite (Floridi, 2012). Clumsy automation can be the result of unintentional “bad design” caused by biases, disregard for edge cases, or poor calibration; but it can also be deliberately created for dishonest reasons, as Girardin (2019) notes. As our dependency on smart devices increases, so do the chances that they end up calling the shots, distorting and constraining our behaviour and our physical and conceptual environments so as to further accommodate us to them instead of the other way around (Floridi, 2012). The danger is that instead of establishing healthy dialectic relationships, we end up adapting to their “needs” only “because that is the best, or sometimes the only, way to make things work” (2012, pp. 252–3). Examples abound where things have to be done in cumbersome ways to accommodate the use of a given technology, even if we no longer notice it—e.g., how human movement and urban planning in general have been conditioned by the adoption of motor vehicles.

Choosing which technologies we incorporate into our lives is a crucial matter, since they play a key role in our self-design (Pitt, 2011). The problem, however, is that we lack a method or framework for doing so beyond simple heuristics, because we cannot know in advance (only speculate) how a given technology will affect our lives in the long term. The problem is made worse because we are being surrounded by an increasing number of sophisticated systems that are specifically designed to influence our behaviour.

3 | FROM POETIC TO BUREAUCRATIC TECHNOLOGIES

Arguably, the “formal indifference” characterising surveillance capitalism is symptomatic of the broader cultural shifts brought about by more than four decades of gradual merging between private and public power in the name of profit—i.e., of neoliberalism (see Brown, 2015). Chief amongst these cultural changes is total bureaucratisation: “the imposition of impersonal rules and regulations […] backed up by the threat of force” (Graeber, 2015, p. 32) over every aspect of daily life, in such a pervasive manner that people cannot imagine things could be done differently.
As a result of this process, bureaucracy has become “the water in which we swim” (2015, p. 4), and every resource, particularly technological change (a.k.a. “innovation”), has been put to the service of management. So, although those belonging to the managerial class believe that, thanks to them, both private and public organisations now prioritise creativity and innovation, reality shows that the exact opposite is true. Instead of investing in technologies that could bring about alternative, more egalitarian futures, organisations have prioritised the development of more sophisticated systems to further increase “labour discipline and social control” (Graeber, 2015, p. 120).

Organisations have not addressed the challenges famously identified by Keynes (1930/2011) almost a hundred years ago. Nor have they brought about more radical changes, such as establishing a “four-hour workweek” or a universal income, nor developed fantastic innovations, such as flying cars or colonies on the Moon. Instead, we ended up with infallible ATMs, high-speed trading, and an unhealthy enthusiasm for surveillance devices that would put the Stasi to shame. Rather than fulfilling the ideals of Ted Nelson (1974) or Stewart Brand, [12] and freeing us from administrative responsibilities, software and ubiquitous computing have “turned us all into part or full-time administrators” (Graeber, 2015, p. 140). What could otherwise be “poetic technologies” have become “bureaucratic technologies” (2015, p. 141). Even more worrying is the fact that the humanistic panoply deployed by UCD, which in many ways contributed to the widespread adoption of computing in the last decades, is serving to perpetuate not only this state of affairs but, as we will see next, a false sense of augmented agency.

4 | AGAINST A FALSE SENSE OF AUGMENTED AGENCY

Surveillance capitalism and the type of technologies this system fosters are fundamentally bureaucratic. Bureaucracy is, by definition, arbitrary, inflexible, alienating, inefficient, and taxing (in the broad meaning of the word). Nonetheless, bureaucratic procedures are often justified by claiming that they will achieve precisely the contrary: that they will make procedures cheaper, more expedient, transparent, and meritocratic. Bureaucratic procedures replace organic tête-à-tête negotiation and bargaining (which presuppose some form of symmetrical relationship between actors) with reductive, generalised imperatives (grounded on hierarchies and asymmetries) that follow the simple formulation “if, then, else”. Bureaucracy, like violence in general, is fundamentally unimaginative. Violence allows people to get away with arbitrary actions; to replace the negotiations and clarifications expected to occur within more egalitarian human exchanges with schematic imperatives. That is why no amount of rhetorical or imaginative effort can efficiently counter the simplicity of a threat such as “cross the line and I will shoot you” (Graeber, 2015).
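The “if, then, else” formulation above is not merely a figure of speech: once a bureaucratic rule is encoded in software, it takes that shape quite literally. The following minimal sketch in Python is hypothetical (the threshold, the score, and the suspend_account helper are all invented for illustration); it mimics the kind of opaque automated decision discussed earlier, such as expelling a user from a service (cf. Kurtis, 2019).

```python
FRAUD_THRESHOLD = 0.8  # hypothetical value; real thresholds are never disclosed to users

def suspend_account(user_id: str) -> None:
    # Stand-in for the irreversible, unexplained lockout (cf. Kurtis, 2019).
    print(f"Account {user_id} has been suspended. This decision is final.")

def automated_decision(user_id: str, fraud_score: float) -> None:
    # The whole 'relationship' collapses into one conditional: there is no
    # branch for negotiation, clarification, or context. If, then, else.
    if fraud_score > FRAUD_THRESHOLD:
        suspend_account(user_id)
    else:
        pass  # carry on (and carry on being measured)

automated_decision("user-1138", fraud_score=0.83)
```

The asymmetry is visible in the code itself: the service computes fraud_score from the user's every move, while the user can inspect neither the score nor the threshold, let alone negotiate over them.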
Despite the above, humans have, to the best of our knowledge, the unique capacity to picture things in their mind's eye; consequently, we can project our ideas and ourselves into hypothetical past and future scenarios. This means that, unless we have some damage in our frontal lobes, we can put ourselves in other people's shoes and imagine what it would be like to stand in their position. This requires interpreting, understanding, and (to varying degrees) caring for other people, their circumstances, and their needs. Bureaucracy, on the other hand, is imposed as a remedy against the above; as a substitute for the myriad complex exchanges and negotiations that people need to carry out when interacting with other people. Bureaucratic procedures are imposed to manage relationships that are already extremely unequal in terms of interpretative (empathic) labour. Bureaucracy embodies and institutionalises “lopsided structures of the imagination”. Thus, as Graeber (2015) contends, bureaucracy is not so much an embodiment of stupidity as a way to manage circumstances that are stupid because they depend on pre-existing inequalities underpinned by structural violence.

Power allows people to behave crassly towards other people. Those in a situation of power and privilege tend to unabashedly avoid engaging in imaginative identification, particularly towards people they see as their inferiors. Attempting to imagine how their subordinates feel is nothing short of a burden; after all, “in most ways, most of the time, power is all about what you don't have to worry about, don't have to know about, and don't have to do” (2015, p. 101). Imaginative, empathic, and caring labour is usually the responsibility of people serving those in the upper echelons of society. After all, servants are people who have to anticipate the needs, desires, whims, and moods of those in power, whereas the latter “can wander about largely oblivious to much of what is going on around them” (2015, p. 81).

The current trends in IoT consumer technologies seem focused on generalising precisely such a bureaucratic, lopsided attitude towards the world. Only this time, the interpretative labour is carried out by smart devices. The emergence of various AI personal assistants (Alexa, Google Assistant, Siri, and Cortana) and smart environments (e.g., Kohler's Numi 2.0 or Whirlpool's smart kitchens) exemplifies this trend. Smart, voice-controlled systems that unlock doors, regulate temperature and lights, play music, or do our laundry simulate the kind of relationship plutocrats have with those around them. Tech companies such as Google, Amazon, and Apple are engaged in cutthroat competition to make AI-powered devices that offer more people the sensation of having the world respond to their desires; that is, to transform the luxuries of the plutocracy into affordable necessities, even for those in the lower strata of society. This form of privileged augmented agency [13] (having one's whims satisfied by an artificial agent at a mere wave of one's hand), however, comes at a cost that far exceeds the benefits. These systems can potentially realise such a (distorted) utopia not because they are outstandingly prescient but, as was earlier discussed, thanks to the massive amounts of data extracted from the very people they are sold to. For the plutocrat, the underling is disposable and easily replaceable; regardless of how intimate or longstanding their relationship might have been, there is no doubt about who holds power. Conversely, the power (and liberty of choice) the average Joe has over his growing network of Alexa-controlled devices is, at best, illusory. What might seem an innocuous indulgence is, in reality, a key element in the kind of Faustian pacts encouraged by surveillance capitalism.
Using an Alexa-controlled musical toilet that lifts its cover and washes and dries one's rear end grants Amazon the possibility of, say, creating a schedule of one's bowel movements and selling it to data brokers for whatever purpose. As of today, IoT devices are not providing kinetic powers but rather a false sense of augmented agency. In the age of surveillance capitalism, tailoring is tantamount to encroachment, of the kind that only an extremely efficient spy could get away with.

Surveillance capitalism and bureaucratic technologies are not only generalising a false sense of augmented agency; they are also hampering the human capacity for self-transformation. Previously, we saw that, from the perspective of contemporary philosophy of technology, a defining feature of human beings is that we self-fabricate our lives; that we can regard our existence as a technical enterprise or, rather, as a design process. Designing is an activity concerned with problem-solving, planning, and projection, but its defining feature is iteration: a reliance on feedback loops. Machines and tools are objects ideally conceived “to defeat the world's resistance” (Flusser, 1991/2014, p. 14), to overcome our “natural” limitations by augmenting or enhancing our physical or cognitive capacities. As intrinsically artificial creatures, we are the sum of the technological enhancements we choose to incorporate into our lives (Pitt, 2011), and these include everything from our means of transportation to our clothing and entertainment. In many ways, we are our technologies. As we progress in life, we experiment; we tinker with myriads of such choices; we test and see whether they fit into our lives and whether they contribute to what we wish to become. Ideally, throughout this process, we make adjustments and corrections; we engage in a feedback loop not unlike those characterising every design project.

This formulation assumes that we have not only the capacity to choose but also the means to evaluate our choices. Nowadays, however, this possibility is becoming the exception rather than the rule. Under the guise of customisation, bureaucratic technologies are curtailing human freedom of choice—all the while promoting the idea that we have too many choices, as the “FOBO” phenomenon suggests (Reagle, 2015). By appropriating users' behavioural data—which companies regard as a free-range resource “for the taking”—to create tradable “prediction products” (Zuboff, 2019), most tech companies relying on Big Data aim to manipulate users' decisions. As noted earlier, the economic model pioneered by Google has been adopted not only by the FAANGS but by virtually every other company selling insurance, healthcare, retail, entertainment, education, financial, and other services (2019). As Zuboff suggests, any IoT product currently labelled as “smart” is either already playing a role in the behavioural data supply chain or is capable of doing so. Major companies and data brokers are continually working to circumvent obstacles to data collection, including users' explicit refusal—for example, by gathering inference data from public sources and from users' unrelated activities, particularly from so-called “data exhaust”. Under such conditions, users have neither access to nor control over their behavioural data, how it is interpreted, or the ends to which it is put.
As Zuboff points out, we are not only being “exiled” from our own behaviour but also from the “knowledge” it yields (Naughton, 2019). This circumstance highlights yet another divide characterising surveillance capitalism: an asymmetric relationship between those who know (but hide behind inscrutable and often dangerously biased “algorithms”) and those who are known and for whom privacy is no longer a right but a luxury. The type of social relations that emerges within surveillance capitalism resembles that of pre-modern absolutism, where territory and rights are concentrated in a single entity. Big Tech is rapidly depriving people of their liberty to choose which information about their lives remains undisclosed. What these organisations are accumulating is not only “surveillance assets”—i.e., data and the technical means to handle it—and capital, but also human rights (Naughton, 2019).

This appropriation and exploitation of informational resources also has a territorial dimension. Big Tech has claimed ownership of the “Infosphere”, the expanding environment inhabited by humans and other informational agents that results from the merger of our physical world and our “onlife” (Floridi, 2005, 2007). For Zuboff (2015), this appropriation echoes the plundering of the so-called New World that started half a millennium ago. [14]

For Big Tech, it is no longer enough to automate information flows about their users; nowadays, they seek to surreptitiously automate users' behaviour too, by manipulating their contexts (Naughton, 2019). As the world becomes more enveloped by smart devices, we find ourselves continuously ambushed by nudges. [15] Having our decisions micromanaged and our choices engineered means we are being denied access to our own experiential feedback loop. This, in turn, means that our capacity to design ourselves, to tinker and explore other possibilities of being-with-technologies, is severely hampered.
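Mechanically, this engineering of choices (Yeung's “hypernudge”) amounts to a feedback loop run continuously and at machine speed, with the user inside it as a data source but outside it as a decision-maker. The following toy simulation in Python is purely illustrative (every class, option, and number is invented) of how observing, predicting, and rearranging options becomes self-reinforcing:

```python
import random

class PredictionModel:
    """Toy stand-in for a 'prediction product': it simply tallies past clicks."""

    def __init__(self):
        self.counts = {}

    def update(self, item: str) -> None:
        # Harvest the behavioural exhaust of the latest interaction.
        self.counts[item] = self.counts.get(item, 0) + 1

    def predict(self) -> str:
        # Forecast the option the user has yielded to most often so far.
        return max(self.counts, key=self.counts.get) if self.counts else "news"

def rearrange(options, predicted):
    # Tilt the choice architecture in real time: the predicted (most
    # 'engaging') option is moved to the front of the interface.
    return sorted(options, key=lambda option: option != predicted)

options = ["news", "outrage", "shopping"]
model = PredictionModel()
for _ in range(100):                     # the loop runs for as long as the user does
    shown = rearrange(options, model.predict())
    clicked = random.choice(shown[:2])   # people mostly pick whatever is up front
    model.update(clicked)                # ...which further entrenches the prediction
print("Engineered preference:", model.predict())
```

The user only ever sees the “personalised” interface; the loop that arranges it, and the record of behaviour that drives it, remain out of reach.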
In this day and age, we are our information and what becomes of it (Floridi, 2014). Since technologies hold such power over what we are, it is vital to imagine new ways in which we can establish a healthier, freer relationship with them. First, however, we ought to question and challenge the inevitability that surveillance capitalism has attached to information technologies (ITs). Surveillance capitalism and the protection of privacy are fundamentally incompatible, and surely neither total bureaucratisation nor surveillance capitalism could have existed without ITs; but it does not follow that ITs are to blame for their existence. The emergence of services designed with privacy in mind is proof that there is an economically viable alternative to Google's “free” surveillance assets.

Bureaucratic thinking (purely concerned with procedures) has taken over our relationship with technologies, and hence over our capacity to change according to our own designs. In our daily struggle to work out our existence (and hence to transform our environment), we now hardly ever question the motives for incorporating a given technology into our lives. Designers, even those subscribing to UCD, no longer question the ontological and deontological dimensions of technologies; they merely attend to their methodological aspects. To paraphrase Flusser (1991/2014), they are leaving aside the “what's” and “why's” of our technological needs to focus solely on the “how's”.

As it is currently being imagined and instantiated, automation is not the kind that brings more leisure time or a healthier, safer, environmentally sound, and egalitarian society where human autonomy is nurtured. Arguably, then, we need some way to challenge this status quo; to reclaim our technologies from bureaucracy and turn them once again into poetic tools; to use bureaucracy and technologies “to bring wild, impossible fantasies to life” (Graeber, 2015, p. 141). What we are missing are not only ways to imagine different relationships with our technologies but also the means (concepts) to talk about our shifting circumstances. Whenever we confront the unknown, the first task we need to accomplish is naming, for “naming is the first step toward taming” (Naughton, 2019). That is why we need insights into the true nature of the phenomena that are changing our world in such radical ways. Criticisms such as Zuboff's and Graeber's provide a sociological framework for contextualising our critique, but something else is still missing: the capacity to imagine how things could be different. Technologically informed philosophical reflection, understood as conceptual design (Floridi, 2013), might take us a long way towards that objective.

5 | CONCLUSIONS AND FUTURE WORK

Surveillance capitalism and the type of technologies it is fostering are fundamentally bureaucratic. Current trends in IoT consumer services are generalising a false sense of augmented agency that mimics the lopsided, unimaginative, and careless relationship plutocracies establish with the world around them. Promising to transform the seemingly innocuous laziness of the powerful into an accessible necessity, bureaucratic technologies are forcing people into nothing short of Faustian pacts. By incorporating systems designed to extract as much behavioural data from them as possible in order to predict and influence their behaviour, users of bureaucratic technologies are surrendering their capacity to self-transform; to develop as individuals. Bureaucratic technologies are alienating users from their own experiences, from the uncertainties that help nurture constructive existential feedback loops. This, of course, is not the way such powerful technologies should operate.

Philosophical reflection in general can help to challenge this state of affairs by instilling doubts about notions that are taken for granted, but also by designing new conceptual tools to understand current technological changes. Philosophy of technology provides a critical framework that embraces the multistable nature of technologies and recognises their role as fundamental components of human nature. This critical mindset should be used to question why AI, together with its current and future operational limitations and consequences, is so often misrepresented or deliberately obfuscated (e.g., by equating intelligence with the capacity to predict) in order to promote dishonest forms of automation. Furthermore, we need to direct our criticism towards current practices in design. We need to challenge and update frameworks such as UCD, rethink what “users' needs” means in the age of smart automation, and ask what kinds of trade-offs we are accepting in the name of supposedly unobtrusive devices. A genuinely humanistic UCD should make sure humans are kept on the loop. User experience should imply a richer rather than a narrower understanding of the world; it should mean more agency and autonomy rather than nudging and manipulation.
A truly rich, technologically mediated human experience should favour not bureaucracy, coercion, and control, but rather possibilities for creating and imagining new ways to be. The analysis offered here is still at an initial stage, and there is much work left to do.

ACKNOWLEDGEMENTS

The author would like to thank all the members of the xCoAx organising committee and the editors of this special issue for their thoughtful comments, guidance, and help in turning this article into readable form.

ENDNOTES

[1] The nefarious consequences of dishonest automation are particularly visible in politics, as shown by a recent study on the influence of YouTube's optimisation algorithm on political and health debates in Brazil (Fisher & Taub, 2019).

[2] Graeber (2015) is referring to “the use of rational and technical means to bring wild fantasies to reality”. “Fantasies” here meaning building pyramids, transcontinental railroads, or exploring space.

[3] For a full description of the differences between historical and hyperhistorical societies, see Floridi (2014, ch. 1). Floridi's concept is roughly equivalent to Flusser's (1985/2011) conception of “posthistory”; for a thorough comparison, see Galanos (2016).

[4] Here, automation is understood broadly as outsourcing physical or cognitive tasks to an artificial system (Danaher, 2018).

[5] Datafication differs from digitisation insofar as the latter merely involves turning an analogue signal into bits, whereas the former implies an explanatory or hermeneutical desire to quantify, record, and interpret phenomena. For a critical view on datafication, see van Dijck (2014).

[6] Datasets may undergo various similar cycles. Once processed, a dataset may be re-packaged by data brokers, sold, further analysed, and then sold again (Zuboff, 2015).

[7] Although there is no definitive, consensual definition of Big Data, the term often serves as a shorthand for the combination of (a) technical infrastructure (information-processing hardware) and (b) techniques for gathering, sorting, and querying vast amounts of data at high speed. By adopting Big Data schemes and practices, organisations seek to discover new patterns in systems' behaviours, distil them into “predictive analytics”, and apply the resulting information to new datasets (Yeung, 2016).

[8] An alternative acronym, “GAFAM”, places Microsoft instead of Netflix and Spotify.

[9] For Heidegger (1954/1977a), technological rationality, embodied in the notion of Gestell, induces us to see everything in the world, including ourselves, as resources that can be put to the service of a technological system. This utilitarian mindset is dangerous because it “undermines our creative engagement with reality, alienates us from ourselves and each other, and leads to the destruction of our habitat” (Merwin, Wendland, & Hadjioannou, 2018, p. 1).

[10] Here, “autonomy” is understood in the broad sense of completing a given process from start to finish without the need for human intervention or oversight beyond the establishment of the initial course of action (Johnson & Verdicchio, 2017). Thus put, a bow and arrow, a windmill, and a self-driving car are all autonomous.

[11] As Ihde & Malafouris (2018) further note, “the kind of minds we have depend on the kind of tools we make and use”. This echoes Nietzsche's alleged claim that “our tools are also working on our thoughts” (in Kittler, 1999).
[12] In the late 1960s, Brand, editor of the Whole Earth Catalog (published between 1968 and 1972) and the most visible voice of the so-called “New Communalists” (Campbell-Kelly, Aspray, Ensmenger, & Yost, 2014), along with Nelson, advocated the widespread adoption of computational technology. Deeply influenced by Norbert Wiener, McLuhan, Buckminster Fuller, and Vannevar Bush, Brand and Nelson regarded computers as powerful DIY tools for achieving personal liberty and happiness.

[13] Here, “agency” is broadly understood as the capacity of an agent to bring about specific changes in the world; this implies that the agent can decide to act (or not), choose to do it in a certain way, and execute the action (Bunnin & Yu, 2009).

[14] As Zuboff notes, now that surveillance capitalism has become more visible, the term “digital native” has gained a bleakly ironic overtone.

[15] As Yeung (2016) argues, since Big Data analytics can be updated and tweaked in real time, they represent a far more powerful tool for behavioural tinkering than the type of nudging originally advocated by Thaler & Sunstein (2008).

REFERENCES

Baek, E., Cagiltay, K., Boling, E., & Frick, T. (2007). User-centered design and development. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 659–670). New York; London: Lawrence Erlbaum Associates, Taylor & Francis Group.

Bogost, I. (2012). Alien phenomenology, or what it's like to be a thing. Minneapolis, Minnesota: University of Minnesota Press.

Bohman, J. (2016). Critical theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2016). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2016/entries/critical-theory/

Brown, W. (2015). Undoing the demos: Neoliberalism's stealth revolution. Brooklyn, New York: Zone Books (distributed by the MIT Press).

Bunnin, N., & Yu, J. (2009). The Blackwell dictionary of Western philosophy. Oxford: Wiley-Blackwell.

Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Campbell-Kelly, M., Aspray, W., Ensmenger, N., & Yost, J. R. (2014). Computer: A history of the information machine (3rd ed.). Boulder, Colorado: Westview Press.

Cudworth, E., & Hobden, S. (2014). Liberation for straw dogs? Old materialism, new materialism, and the challenge of an emancipatory posthumanism. Globalizations, 12(1), 134–148. https://doi.org/10.1080/14747731.2014.971634

Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy & Technology, 31(4), 629–653. https://doi.org/10.1007/s13347-018-0317-3

Fisher, M., & Taub, A. (2019, August 11). How YouTube radicalized Brazil. Retrieved 24 September 2019, from The New York Times website: https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html

Floridi, L. (2002). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4, 287–304. https://doi.org/10.1023/a:1021342422699

Floridi, L. (2005). The ontological interpretation of informational privacy. Ethics and Information Technology, 7(4), 185–200. https://doi.org/10.1007/s10676-006-0001-7

Floridi, L. (2007). A look into the future impact of ICT on our lives. The Information Society, 23(1), 59–64. https://doi.org/10.1080/01972240601059094

Floridi, L. (2012). The road to the philosophy of information. In H. Demir (Ed.), Luciano Floridi's philosophy of technology: Critical reflections (pp. 245–271). https://doi.org/10.1007/978-94-007-4292-5_13
Floridi, L. (2013). Technology's in-betweenness. Philosophy & Technology, 26(2), 111–115. https://doi.org/10.1007/s13347-013-0106-y

Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford, UK: Oxford University Press.

Floridi, L. (2019). Marketing as control of human interfaces and its political exploitation. Philosophy & Technology, 32(3), 379–388. https://doi.org/10.1007/s13347-019-00374-7

Flusser, V. (2011). Into the universe of technical images (K. N. Hayles, M. Poster, & S. Weber, Eds.; N. A. Roth, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1985)

Flusser, V. (2014). Gestures (N. A. Roth, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1991)

Foster, J. B., & McChesney, R. W. (2014). Surveillance capitalism: Monopoly-finance capital, the military-industrial complex, and the digital age. Monthly Review, 66(03). Retrieved from https://monthlyreview.org/2014/07/01/surveillance-capitalism/

Galanos, V. (2016). Floridi/Flusser: Parallel lives in hyper/posthistory. In Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science): Vol. 375. Computing and philosophy: Selected papers from IACAP 2014 (pp. 229–243). https://doi.org/10.1007/978-3-319-23291-1_15

Galanos, V. (2017). The double meaning of 'replacement' and the moral value of human and nonhuman inforgs: Crossroads of philosophy of information and actor-network theory. In P. Arvola, T. Hintsanen, S. Kari, S. Kolehma, S. Luolin, & J. Sillanpää (Eds.), Improving quality of life through information: Proceedings of the XXV BOBCATSSS Symposium (pp. 79–84). Tampere, Finland.

Garber, M. (2014). These dolphins are using sea sponges as tools. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2014/04/these-genius-dolphins-are-using-sea-sponges-as-tools/361168/

Girardin, F. (2019, January 16). When automation bites back. Retrieved 10 September 2019, from Near Future Laboratory website: http://blog.nearfuturelaboratory.com/2019/01/16/when-automation-bites-back/

Graeber, D. (2012). Of flying cars and the declining rate of profit. The Baffler, (19). Retrieved from https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit

Graeber, D. (2015). The utopia of rules: On technology, stupidity, and the secret joys of bureaucracy. Brooklyn; London: Melville House.

Heidegger, M. (1977a). The question concerning technology. In W. Lovitt (Trans.), The question concerning technology and other essays (pp. 3–35). New York; London: Garland Publishing. (Original work published 1954)

Heidegger, M. (1977b). The question concerning technology and other essays (W. Lovitt, Trans.). New York; London: Garland Publishing. (Original work published 1954)

Ihde, D. (2009). Postphenomenology and technoscience: The Peking University lectures (L. Langsdorf, Ed.). Albany, New York: SUNY Press.

Ihde, D., & Malafouris, L. (2018). Homo faber revisited: Postphenomenology and material engagement theory. Philosophy & Technology, 32(2), 195–214. https://doi.org/10.1007/s13347-018-0321-7

Jacobs, I. F., Bayern, A. von, & Osvath, M. (2016). A novel tool-use mode in animals: New Caledonian crows insert tools to transport objects. Animal Cognition, 19(6), 1249–1252. https://doi.org/10.1007/s10071-016-1016-z
Johnson, D. G., & Verdicchio, M. (2017). Reframing AI discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6

Keynes, J. M. (2011). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 358–373). New York; London: W. W. Norton & Company. (Original work published 1930)

Kittler, F. A. (1999). Gramophone, film, typewriter (T. Lenoir & H. U. Gumbrecht, Eds.; G. Winthrop-Young & M. Wutz, Trans.). California: Stanford University Press.

Kurtis, L. (2019, August 13). Apple locked me out of its walled garden. It was a nightmare. Retrieved 4 October 2019, from https://qz.com/1683460/what-happens-to-your-itunes-account-when-apple-says-youve-committed-fraud/

Martinho-Truswell, A. (2018, February 13). To automate is human. Retrieved 24 September 2019, from Aeon website: https://aeon.co/essays/the-offloading-ape-the-human-is-the-beast-that-automates

Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Boston; New York: Houghton Mifflin Harcourt.

McCoy, D. E., Schiestl, M., Neilands, P., Hassall, R., Gray, R. D., & Taylor, A. H. (2019). New Caledonian crows behave optimistically after using tools. Current Biology, 29(16), 2737–2742.e3. https://doi.org/10.1016/j.cub.2019.06.080

McNeil, J. (2018). Big brother's blind spot. Retrieved from https://thebaffler.com/salvos/big-brothers-blind-spot-mcneil

Merwin, C., Wendland, A. J., & Hadjioannou, C. (2018). Introduction: Heidegger's thinking through technology. In A. J. Wendland, C. Merwin, & C. Hadjioannou (Eds.), Heidegger on technology (pp. 1–12). Retrieved from https://books.google.pt/books?id=1QlrDwAAQBAJ

Mumford, L. (1967). The myth of the machine: Technics and human development. Harcourt, Brace & World.

Naughton, J. (2019, January 20). 'The goal is to automate us': Welcome to the age of surveillance capitalism. Retrieved 1 February 2019, from https://web.archive.org/web/20190201222516/https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook

Nelson, T. H. (1974). Computer Lib / Dream Machines: New freedoms through computer screens – a minority report (1st ed.) [Self-published]. Sausalito, California.

Norman, D. A., & Draper, S. W. (Eds.). (1986). User centered system design: New perspectives on human–computer interaction. New Jersey; London: Lawrence Erlbaum Associates.

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Ortega y Gasset, J. (1964). Meditación de la técnica. In Obras completas: Vol. 5 (1933–1941) (6th ed., pp. 317–375). Madrid: Revista de Occidente. (Original work published 1939)

Pitt, J. C. (2011). Doing philosophy of technology. https://doi.org/10.1007/978-94-007-0820-4

Reagle, J. (2015). Following the Joneses: FOMO and conspicuous sociality. First Monday, 20(10). https://doi.org/10.5210/fm.v20i10.6064

Rosenberger, R. (2014). Multistability and the agency of mundane artifacts: From speed bumps to subway benches. Human Studies, 37(3), 369–392. https://doi.org/10.1007/s10746-014-9317-1

Ruckenstein, M., & Granroth, J. (2019). Algorithms, advertising and the intimacy of surveillance. Journal of Cultural Economy, 1–13. https://doi.org/10.1080/17530350.2019.1574866
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Retrieved from https://books.google.pt/books?id=cYdYngEACAAJ

Thompson, S. A. (2019, April 30). These ads think they know you. Retrieved October 22, 2019, from The New York Times website: https://www.nytimes.com/interactive/2019/04/30/opinion/privacy-targeted-advertising.html

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. Retrieved from https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/datafication

Varghese, S. (2019, October 21). The junk science of emotion-recognition technology. Retrieved October 22, 2019, from The Outline website: https://theoutline.com/post/8118/junk-emotion-recognition-technology?zd=2&zi=xi4vmsgi

Wallach, D., & Scholz, S. C. (2012). User-centered design: Why and how to put users first in software development. In A. Maedche, A. Botzenhardt, & L. Neer (Eds.), Software for people: Fundamentals, trends and best practices (pp. 11–38). https://doi.org/10.1007/978-3-642-31371-4_2

Weiser, M. (1999). The computer for the 21st century [Reprint]. ACM SIGMOBILE Mobile Computing and Communications Review, 3(3), 3–11. https://doi.org/10.1145/329124.329126 (Original work published 1991)

Yeung, K. (2016). 'Hypernudge': Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118x.2016.1186713

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Zwier, J., Blok, V., & Lemmens, P. (2016). Phenomenology and the empirical turn: A phenomenological analysis of postphenomenology. Philosophy & Technology, 29(4), 313–333. https://doi.org/10.1007/s13347-016-0221-7

BIOGRAPHICAL INFORMATION

Rodrigo Hernández-Ramírez (b. 1982) is assistant professor and coordinator of the master programme in Interaction Design at IADE, Universidade Europeia. He is an integrated member of UNIDCOM/IADE, as well as a member of CIEBA (FBAUL). His research interests stand at the intersection of philosophy of technology, design, and new media. He is particularly interested in human–technology relations and how technologies shape the way we understand the world and ourselves.