Algorithms are the DNA of most modern and autonomous computational systems. They are “the mind of the autonomous system,” and as such they allow different hardware and software to learn, detect, and adapt to new digital contexts (Danks and London 4691). Their genetic intelligence makes algorithms both generative and receptive parts of a digital architecture or ecosystem, and their ontology is such that they demand reciprocity. Facing the emergent complexities of this reciprocal relationship between autonomous technologies and their environment, scholars and practitioners alike are careful to couch the ethical problem of algorithms within the larger social context of network relations and value-systems. From the perspective of the readings, society and its principles still shape the reality and outcomes of AI. Today’s algorithmic reality, however, shows that ambiguities abound around the implementation, interoperability, relevance, and overall actionability of those principles. We might cut through this ambiguity by re-positioning control at the centre of the issue of AI, and we can do so by elevating reciprocity as the first-principle framework for all value-based approaches to AI.
Behind every algorithm there are sets of principles. This much is reflected in the scholarship that details the principles needed for algorithm/AI design and regulation. In Mozilla’s “Creating Trustworthy AI,” landscape scans of frameworks for the future of machine ethics suggest “global convergence” around the principles of “transparency, fairness, and human well-being” (26). The findings of “The Moral Machine” by Awad et al. support this surprising convergence. After conducting a global survey on the dilemma-situations posed by autonomous vehicles, the report identifies three universally stable “preferences” of outcome: “the preference for sparing human lives…sparing more lives…[and] for sparing young lives” (63). While the report notes statistically significant variation at the individual and country/cultural levels, these variations were not substantial enough to preclude universal consensus around these three principles, a minimal value-system for autonomous vehicles and machine ethics.
Professor Alex London of Carnegie Mellon University, writing with David Danks, advances a different view of comparative value-systems. Despite tying machine bias to the social embeddedness of algorithms, Prof. London does not think principles can generate broad enough consensus and accountability around AI regulation and machine ethics: “diverse societies exhibit significant variation in both immediate and high-order relevant values” (4695). Moral relativity thus poses a significant barrier to any systematic attempt to develop a global framework for ethical AI design and regulation. For Prof. London, bias is deviation from a standard, and when this deviation is the product of machine bias it can be deeply problematic. However, he challenges us to think also about when deviations in machine bias become “valuable components of a reliable and ethically desirable system” (4692). Put simply, algorithmic bias has the potential to correct other harmful forms of bias, whether of machine or of man.
Prof. London situates the critique of algorithmic bias within the broader ethical critique of the social context in which it always emerges. Algorithms do not originate in a vacuum; rather, they tend to reflect the social mores and value-systems of a given cybernetic ecosystem, whether by offending against them or by reifying them. As an ethical space, the ecosystem precedes an algorithm and the impacts of its autonomous learning and training. Thus, the true paradox of autonomous technologies and machine bias is that they become parallax, digital objects: they do not only hold up a mirror to the ecosystem and its actors; they look back. The question remains: how can we establish or incentivize ethical consensus around a technology whose esoteric operations exceed common sense, and whose scale traverses every nook and cranny of the globe? Entertaining again the findings of “The Moral Machine,” there is something still worth considering.
At its conclusion, the report from Awad et al. states that beyond the “conflicts, disagreements, and dissimilarities” expressed in the minutiae of its data, there exists the possibility for a common ethics when faced with the dilemma of autonomous learning and training. The key to founding this kind of ethics is couched in the language of the survey itself: “preference.” And what are ethics, value-systems even, if not a willingness to draw and re-draw individual and collective preference? This notional ethics does not, and cannot, exist without the furnishings of a properly political space that harbours plurality and, more precisely, reciprocity.
The overarching goal of Mark Surman’s manifestos is the development of AI that empowers people (10). This type of AI is unequivocally built around a constitutional, conscientious acceptance of human responsibility within cybernetic ecosystems. It is also built around the concept that “data we create while interacting in the digital world is… ‘ours’ or under our control.” Bracketing for now whether ownership and control are interchangeable, the two closely related concepts conjure not only the abstract freedom of being able to choose how to engage with the digital world, but also the sense that this level of choice, of control, is preferential. And to what? Well, to not having the choice; to not having the right to choose and so prefer control. Crucially, this zero-pointing preference for control implies an even more original reciprocity between the structure of the cybernetic ecosystem and the actors that create it and find themselves within its bounds. Considering the context of today’s platform capitalism, institutional AI, and the associative non-choice of prescriptive technologies that dominate the digital public sphere, the obviousness of the preference for choice, and the reciprocity between actor and structure it demands, should not be overlooked.
Surman quotes Ursula Franklin, who greatly informs his thought: “[reciprocity] is neither designed into the system nor is it predictable” (11). Both Surman and Franklin affirm reciprocity as a necessary, extimate part of the process of algorithm design and function.[1] As such, this reciprocity is not the same organic, automated give-and-take that occurs between an algorithm and its environment, and it is by no means guaranteed. Instead, reciprocity in this context is similar to what Hannah Arendt called “the right to have rights”: reciprocity is a principle that frames all other principles as achievements of a genuine politics. Extending reciprocity in this way to the automated, digital realm of algorithmic technologies and AI, we can see that, as a principle, it is not something given; rather, reciprocity must be re-asserted constantly in our willingness to prefer choice and control, again and again. The “movement-building” that Surman references as part of the Mozilla plan for the future of AI building and regulation is both simpler and more complex than boilerplate prescriptions: control is always a matter of choice, when we collectively realize and organize around the fundamental reciprocity of the digital commons.[2] But this requires, first, a willingness to assume the a priori preference for such a choice.
[1] “Extimacy” is borrowed from Jacques Lacan, who describes an extimate object as something existing both “inside” and “outside”. The word neatly problematizes spatial opposition and, in the case of reciprocity, describes the simultaneous presence of the principle as an externality and as the internal movement constitutive of the cybernetic ecosystem (https://nosubject.com/Extimacy).
[2] Ricks, Becca, and Mark Surman. “Creating Trustworthy AI: A Mozilla White Paper on the Challenges and Opportunities in the AI Era.” Mozilla Insights, 2020, 6.
Taken together, the two works from Surman revive reciprocity as the principle upon which the preferences for agency, accountability, and control over what has hitherto been deemed out of our control are founded. Tautologies aside, it is helpful to remember that science fiction is science-fiction. The “debate” should no longer be about whether we have control over autonomous technologies and the people and processes that create them. We do.
Thus, the core antagonism of the problem of control is a question for philosophy and psychoanalysis: do we want control? If so, what would we do with it? Alternatively, do we enjoy “too much” (excessively) the deficient reciprocity, the exploitation and hyper-expropriation of surplus-value, and the lack of control characteristic of today’s platform economy? If we consider enjoyment, or how we collectively enjoy, as a political phenomenon, then the task of today’s politics necessarily becomes an earnest re-evaluation of preference. Unless, of course, we “prefer not to.” Finally, value-creation and value-systems are not mutually exclusive phenomena. In fact, it is best that we begin to understand their relationship as reciprocal.
Works Cited:
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schultz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563, no. 7729 (2018): 59–64.
Danks, David, and Alex John London. “Algorithmic Bias in Autonomous Systems.” Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017. https://doi.org/10.24963/ijcai.2017/654.
“Extimacy.” No Subject: Encyclopedia of Lacanian Psychoanalysis, May 24, 2019. https://nosubject.com/Extimacy.
Ricks, Becca, and Mark Surman. “Creating Trustworthy AI: A Mozilla White Paper on the Challenges and Opportunities in the AI Era.” Mozilla Insights, 2020.
Surman, Mark. “The Real World of AI.” May 2021: 1–13.
Probably goes without saying, but this was a piece for class. So, I hope you stick around past the embedded citations and the formatting. God Bless :)