Algoethics and Algocracy: an existentive problem [part 1]

Luciano Ambrosini
14 min read · Apr 5, 2023


Algoethics & Algocracy 01 credit Luciano Ambrosini

Italian version here

Finally, I apologize to the reader for taking this translation licence: I could not find a precise English term for the Italian "esistentivo", so I coined "existentive".

Invitation to abstraction
Abstraction is a conceptual process frequently used in computational approaches, in which general rules and concepts become a method through the use and classification of specific topics, helping designers systematise their work. In the rest of this short article I reflect together with the reader, trying to abstract myself temporarily from certain cultural superstructures; in practice, I would like the reader to try to do the same, analysing their own techno-social condition and projecting themselves into possible future scenarios (and, why not, sharing them). Setting aside dystopian or utopian scenarios,

abstraction, albeit temporary, is to be understood as a preliminary heuristic exercise that trains one to search for the broadest possible meaning: not necessarily greedy for answers, but rather predisposed to find them.

An existentive problem
We are at the dawn of the fifth industrial revolution, and the unstoppable progress in the field of AI worries academics, researchers and even men of faith (far fewer the enthusiasts). This type of concern has always been part of human history; nothing new. However, there is certainly something new in the air this time: the leap of thought that transforms the certainty of one moment into the doubt of the next. It is like Socrates' concept of time: we are in the present, at the moment in which a new paradigm shift is manifesting itself, but the evidence is still evanescent. We can focus on the past, hence its origin, but the past no longer is; we can formulate hypotheses about the probable future and imagine it as a projection, but it is not yet. In this precise present, however (and it is this personal consideration that constantly urges me to distance myself from false prophets), we do not have the ability to focus on this projection, or at least to strip it of our innermost fears and desires. Only recently has man begun to wonder how the present, which is perceptible, tangible and therefore palpable, can separate two things that do not exist, and how it is possible to predict or shape what will be.

This thought arises punctually in the history of technological progress; let us call it the "point of uncertainty". It has nothing to do with the Kurzweilian singularity, if anything it precedes it: it is the point at which some of the capabilities deemed specifically the domain of homo sapiens are eloquently called into question. In particular, and with increasing clarity since April 2022, this technology has reached the hearts of the masses through art, design and illustration. If to the few I can speak of text-to-image generative models, a technology that has turned into a mainstream phenomenon, to the many I need only mention DALL-E, Stable Diffusion and Midjourney for them to understand what I am referring to. In historical terms, one of the most blatant points of uncertainty was that of 1779, well known through the British worker Ned Ludd (probably a fictional character), whose actions gave concrete form to man's innate fear of being replaced, an episode with tragic repercussions for the working class of that present. In our 2023 the same uncertainty returns, though I would more cautiously call it cyber-permutation: man's replacement in some of his work tasks (functions) by the unimagined computational capacity achieved by machines. This is how doubt takes shape, or perhaps I should say how the word takes shape, captivated by Generative Pre-trained Transformer (GPT) technology.

Without being branded fanatics of Asimov, the Wachowskis or Alvin Toffler, or accused of techno-sophism or anything else, none of this: the point of uncertainty simply emerges from the natural, indeed human, tendency to simplify and reduce the efforts and sufferings that grip daily life, whether as entropic opposition or as a simple hedonistic need within the bio-socio-economic system in which man lives. Exactly as happened with the manufacturing loom, the locomotive, 3D printing, anthropomorphic robotic arms and so on, we have silently witnessed a process of permutation. Far be it from me to resort to cinematic substitution races between the world of machines and the fight for the survival of the city of Zion; I think it is more appropriate to speak of permutations, given that man's existentive problem arises from the algorithm. In the mathematics of sets, a permutation is a one-to-one correspondence of all the elements of a set with the set itself. To be permuted means to change an initial condition: an asset passes from one owner to another, a person from one place to another. Simplifying, permutation can mean a re-evaluation of one's cultural boundaries, of one's function in a broader context. Technological progress sublimated in AI represents a new element of our whole, a tool that is a product usable by the man to whom it is linked. If memory serves, this is a concept already exhaustively addressed by Aristotle in his Nicomachean Ethics. In practice, Aristotle believed that what is produced is linked to the producer through the concept of causality. In his philosophy there are four types of cause: material, formal, efficient and final. I believe the last two are the most appropriate in this context. Efficient causality refers to the agent or force that causes a thing to exist or change. Final causality refers to the ultimate purpose or goal of a thing. In both cases we find the active presence of man: it is man who shapes (trivially, who reduces the world into databases/datasets) but, above all, it is man who gives AI a purpose or, more appropriately, a function. I believe that the conversion of a function into a purpose is mainly dictated by the foresight of human faculties, but that it is equally difficult to evaluate over time intervals based on the duration of a man's life. So, to the question of whether we are in the presence of sentient AI (AGI), my answer is negative; however, I cannot exclude that the foundations for its emergence are being laid right now. It is as if we were witnessing the birth of the cocoon without yet knowing whether it will become a butterfly.
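The set-theoretic idea of permutation used above can be sketched in a few lines; the Python below is purely illustrative, and the element names are invented:

```python
from itertools import permutations

# A permutation is a one-to-one (bijective) mapping of a set onto itself:
# each arrangement re-orders the same elements, adding and removing nothing.
elements = ["weaver", "driver", "printer"]

all_permutations = list(permutations(elements))
print(len(all_permutations))  # 3! = 6 possible re-arrangements

for p in all_permutations:
    # Bijectivity: the members are unchanged; only their position (function) moves.
    assert set(p) == set(elements)
```

In this sense, cyber-permutation is a re-assignment of functions within the same set, not the removal of an element from it.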

In this present it is important to define, in a forward-looking way, the efficiency and purpose of this great human work called AI

The fears related to human substitution by AI, however legitimate in emotional terms, are set aside in my reflection because, in speaking of permutation, the axis of the intrinsically hermeneutical reasoning has shifted, to quote Heidegger, not onto questions of existentialism but onto existentive questions. While the former take individual and collective human existence as the cornerstone of contemplation, leaving room for more or less intense religious or humanistic drifts, at times with pessimistic or optimistic overtones (a few weeks ago, for example, came the news of Theta Noir [1], a sect that worships AI, to name just one, though I find it more of a publicity stunt), the latter focus on man's ability to interact with the concrete possibilities the world presents to him. It is as if, on that hypothetical graph of technological progress, at the point of uncertainty it were still man who establishes the direction in which the curve continues: the directional derivative, in short. Building awareness and perception of opportunities and, above all, understanding how and to whom to extend these opportunities should be the first step towards a Post-Digital Era or, if you prefer, one that is completely just in time.

What are algoethics and algocracy?
Let's start by clarifying the meanings: both terms are recorded by the Accademia della Crusca as neologisms.

The term algoethics describes the study of the problems and ethical implications associated with the application of algorithms [2]

While:

The term algocracy describes a digital network environment in which power is exercised in an increasingly profound way by algorithms, i.e. the computer programs underlying media platforms, which make some forms of interaction and organization possible and hinder others [3]

The first term began circulating in 2018 through the writings of the scholar Paolo Benanti [4], a Roman theologian and Franciscan of the Third Order Regular (TOR) who deals with ethics, bioethics and the ethics of technology. In particular, his studies focus on the management of innovation: the internet and the impact of the Digital Age, biotechnologies for human enhancement and biosafety, neurosciences and neurotechnologies. I will call on him again later in this regard. The second term was first used by the scholar Alessandro Delfanti [5] and established itself between 2018 and 2020.

The aforementioned uncertainty can be considered the daughter of algocracy, in the sense that over the last decade man has already relied on algorithms, computer solvers, for the performance of certain tasks, even in the legal and welfare fields, delegating to them the ability to provide for or manage people's possibilities. The act of unconditionally delegating to what were historically called Sapiens Systems, before the term artificial intelligence took hold in the collective imagination as an advertising slogan, has finally raised a crisis of limits affecting the identity of homo sapiens. Uncertainty has been structured around the thin thread of human conscience that links ethical aspects to the values of equality, unfortunately passing through utilitarian economic ones (because we are all part of the same market). The critical point is this: if the spectre of profit, of the pioneering conquest of this or that market segment, pulls the strings of uncertainty, this race to arm ourselves with AI will turn into a problem, into a crisis of opportunity and, sooner or later, into an ethical-social emergency (it is already happening). Compared to the past, the problem is not access to resources, which is always a problem of an oligarchic nature tending towards monopoly, but access to knowledge and the identification and recognition of common values. Common to whom? And above all, who decides what is knowledge and what is not? The rhetorical question is not trivial, simply because neural networks are not trained on infinite databases: however large, those databases represent a reduction of the world itself. So who decides which reduction is the best? This reminds me of the legend of St. Augustine of Hippo and the child who wanted to pour all the water of the sea into a hole in the sand with a simple bucket. Is this perhaps what we are unknowingly trying to do?
With the advent of AI it has become apparent, or at least it is my impression, that knowledge, besides being reducible to a database, has also been downgraded to a utilitarian fact: expandable and disseminable only if it coincides with the main purpose for which it was made available, according to a build fast and ask questions later logic. In this way the pioneering AI organizations have fulfilled their purpose in a utilitarian and profitable way. In the second half of the 18th century it was Jeremy Bentham, the radical theorist and politician of Anglo-American philosophy of law, who formalised utilitarian theory. For clarity: according to this theory, what is useful is what results in the greatest happiness of the greatest number of people. The philosopher argued that a typical need of all utilitarians can be traced back to making ethics an exact science like mathematics: a rigorous hedonism based on the calculation of the quantitative difference between pleasures (so recites Treccani, the authoritative Italian dictionary). Now, if access to knowledge in the broadest sense is delegated to restricted circles of men or, as would happen, to hi-tech groups, it goes without saying that the next step will be to try to make ethics computable, because it is assimilable to an exact science, and with a quick abstraction we would find the "producer" judged by the "thing produced".
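Bentham's ambition to make ethics "an exact science" can be caricatured in a few lines of code. The sketch below is deliberately naive (every name and weight is invented), and it exists mainly to show how much such a reduction discards:

```python
def net_utility(effects: dict[str, float]) -> float:
    """Felicific calculus reduced to a sum: signed pleasure/pain per person."""
    return sum(effects.values())

def more_useful(act_a: dict[str, float], act_b: dict[str, float]) -> bool:
    """'Useful' = the greater happiness of the greater number, as a comparison of sums."""
    return net_utility(act_a) > net_utility(act_b)

# Two hypothetical acts with crudely quantified effects on three people:
act_a = {"alice": +2.0, "bob": -1.0, "carol": +0.5}   # net +1.5
act_b = {"alice": +1.0, "bob": +1.0, "carol": -2.5}   # net -0.5

print(more_useful(act_a, act_b))  # True
```

Everything the article worries about (who chooses the people, the weights and the scale) lives outside these few lines, which is precisely the point.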

I believe that the trigger of the point of uncertainty derives precisely from this concept, subtly instilled by algocracy: the excessive trust placed in the presumed infallibility of a judgment synthesised in calculation, and in the possibility of considering human activity completely computable

The more this concept spreads in the mainstream, the more the attempt is made to physically remove it from the human orbit, and the more the foundations for AGI solidify, because it can then be embodied in an entity that is no longer abstract.

In one of his articles [6] on the subject, the researcher and friar Paolo Benanti affirms a concept as simple as it is rich in beauty, a universally recognized moral value:

Life is an existence, not a functioning. With this quip we can frame a problem that starts with Chomsky and the idea of the mind as a machine that works, and reaches up to the present day. It is evident that here there is another threshold problem between the machine and us: between, on the one hand, the idea of a mechanism that describes everything and, on the other, the concept of person, a concept that has been the very condition of possibility of the West. Today this concept seems, to some extent, to dissolve behind the idea of a human being that works.

Benanti thus inaugurates a new human chapter on the subject of bioethics and technology: algoethics. In practice, one has to imagine it as a sort of guardrail that helps keep the machine (A/N: the progress of AI) within certain margins, margins of moral values I would say. Of course, along the journey a lane departure, or a bad accident, could occur, but basically this is taken into due account (a bit like the concept of the percentile); the researcher calls it the threshold of ethical attention. If the West is not yet willing to trade its cultural history, with its Greek foundations, for the religion of animism (which may be why robotics in Japan enjoys a far higher level of acceptance in everyday life), then AI, the machine, is the thing most remotely comparable to the concept of person, and the management of the ethical threshold cannot be a matter for the technician, the ethics engineer, alone. In some passages of my doctoral thesis, mainly addressed to computational thinking, I dwelt on the attention that must be returned to proxemics, and in this disquisition I feel like proposing it again, since technological choices have always been a driving force for social development. Benanti recalls in the article the historic episode of Jones Beach and the bridges of the white middle class as a guiding parable.

Algoethics & Algocracy 02 — LA

In short, to use Benanti's words once again, the fundamental question is not technological but ethical-philosophical, as he reports in L'Osservatore Romano [7]:

to the extent that we want to entrust human skills (understanding, judgment and autonomy of action) to AI software systems, we must understand the value, in terms of knowledge and capacity to act, of these systems that claim to be intelligent and cognitive

A bit as happens with pharmaceutical products, the introduction of a sort of package leaflet could in some way facilitate this investigation and bring us closer to understanding the costs and benefits that can derive from the use of Machine Sapiens. In practice, an algoethics approach is, in my opinion, already partly in place: the most attentive AI reader and user will have noticed the introduction, from a certain moment on, of output-control devices in ChatGPT and subsequently in Bing (after the partnership between Microsoft and OpenAI), with human feedback used to correct a series of answers over which the AI has shown a sort of uncertainty. In practice, it is an extra, human-centred type of training.
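The correction mechanism described above can be reduced to a toy sketch; this is not OpenAI's actual feedback pipeline, just an illustration of the idea that a human rating steers what the system surfaces next, with all names hypothetical:

```python
# Toy sketch of human-in-the-loop correction: answers the model is unsure
# about are shown to a human rater, and the rating adjusts a per-answer
# score used when choosing which answer to prefer in future.
scores: dict[str, float] = {"answer_a": 0.0, "answer_b": 0.0}

def human_feedback(answer: str, approved: bool) -> None:
    """Reward approved answers, penalise rejected ones."""
    scores[answer] += 1.0 if approved else -1.0

# Hypothetical rating session:
human_feedback("answer_a", approved=True)
human_feedback("answer_b", approved=False)

best = max(scores, key=scores.get)
print(best)  # answer_a
```

The real systems replace this lookup table with a learned reward model, but the human-centred shape of the loop is the same: people, not the machine, supply the corrective signal.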
The Churches of the main Abrahamic religions also want access to this knowledge, or rather to the writing of algorithms whose roots are guided by ethical principles. It happened on January 10, 2023, with the Rome Call: Artificial Intelligence Ethics [8], as reported in the Microsoft press release, in which the monotheistic Churches team up with Microsoft and IBM to write algorithms and collectively establish what is good for people. Chief Rabbi Eliezer Simha Weisz, a former member of the Council of the Chief Rabbinate of Israel, offers a disturbing comparison, a transitive property if you prefer:

Judaism exalts the wisdom of humanity, created in the image and likeness of God, which manifests itself in human innovation in general and in artificial intelligence

In particular, Benanti's position is widely shared at the Rome Call for AI Ethics and, more generally, by the RenAIssance Foundation [9], of which he is a co-founder:

“We need to establish a language that can translate moral values ​​into something computable for the machine”

A statement that makes some researchers turn up their noses: for example Francesco Vannini, an anthropologist with a degree in political science and, since 2022, director of MIT Sloan Management Review Italia, the Italian edition of the journal of the Massachusetts Institute of Technology (MIT) business school. Vannini is very clear in his analysis, which he articulates in three steps [10] warning us against errors that could prove irreversible. First, in trying to make ethics computable, ethics becomes the domain of the expert, so as to make it digestible to the machine. Second, we open the door to the moral autonomy of the machine (read: the management of human values). Third, since the machine can compute more cleverly than man, it will then be considered more moral than man.

On this last point Vannini is clear: there are important data scientists, including Judea Pearl, of Jewish culture and winner of the prestigious Turing Award in 2011, who consider it possible to reduce human thought to a causal approach, the Ladder of Causation, consisting of three levels:

  • Association: regularity in observations, predictions based on passive observation (correlation or regression);
  • Intervention: attention to what cannot be present in the data (which concerns the past);
  • Counterfactuals: comparing the factual world with a fictional one.
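The gap between the first two rungs can be shown with a minimal simulation (illustrative only, with made-up variables): a confounder Z drives both X and Y, so X and Y correlate under passive observation (rung 1), while setting X by intervention, Pearl's do-operator, makes the association vanish (rung 2).

```python
import random

random.seed(0)

def observe(n=10_000):
    """Rung 1: passive observation; a confounder Z drives both X and Y."""
    data = []
    for _ in range(n):
        z = random.random()
        x = z + 0.1 * random.random()  # X merely follows Z
        y = z + 0.1 * random.random()  # Y also follows Z (X has no effect on Y)
        data.append((x, y))
    return data

def intervene(n=10_000):
    """Rung 2: do(X); X is set by fiat, independently of Z."""
    data = []
    for _ in range(n):
        z = random.random()
        x = random.random()            # the intervention severs the Z -> X arrow
        y = z + 0.1 * random.random()
        data.append((x, y))
    return data

def corr(pairs):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(corr(observe()))    # strong association under passive observation
print(corr(intervene()))  # the association vanishes under intervention
```

No amount of passively observed data distinguishes the two worlds; only the act of intervening does, which is exactly the distinction the Ladder encodes.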

Vannini, however, recognizes in Pearl a STEM forcing that tends towards the reductionism of human thought; he states:

In short, if multidisciplinarity and complexity are accepted, the attempt to reduce human thought, human intelligence, and human wisdom to the three rungs of the Ladder of Causation ends up appearing to us as a trivial exercise in reductionism. A way to cage thought, more than a way to grasp its meaning.

Algoethics & Algocracy 03 — LA

This is because in his book The Book of Why Pearl knowingly sets out to structure a machine inference process deliberately superior to man himself. Pearl hopes for the possibility of learning from what man should and could create: Sapiens-Machines capable of distinguishing good from evil better than man does. In any case, on the question of whether we are already in the presence of sentient systems, his answer is negative; but he stresses that with his approach (A/N: that of moral intelligence), rather than Turing's of 1950 or Asimov's three science-fiction laws, we would without doubt arrive at AGI.
Vannini delves into various further logical steps, but I would like to close this part with a question of his that I extend to the reader: why prefer machines to ourselves?

PART 2 Coming soon…

References

[1] Source: wired.it, https://www.wired.it/article/intelligenza-artificiale-religione-setta.

[2] Source: Accademia della Crusca, https://accademiadellacrusca.it/it/parole-nuove/algoretica/18479.

[3] Source: Accademia della Crusca, https://accademiadellacrusca.it/it/parole-nuove/algocrazia/18478.

[4] Paolo Benanti, Oracles. Between Algoethics and Algocracy, Rome, Luca Sossella Editore, 2018.

[5] Alessandro Delfanti, Adam Arvidsson, Introduzione ai media digitali, Bologna, Il Mulino, 2013, p. 23.

[6] P. Benanti, "The frontiers of humanity: the dilemmas of algoethics. Interview with Paolo Benanti", https://tinyurl.com/djhus4ps.

[7] In response to Pope Francis' video message on the occasion of the IV World Meeting of Popular Movements, 16 October 2021. P. Benanti, "The need for an algorithm", https://tinyurl.com/PBOssRoman.

[8] Source: https://tinyurl.com/MSRomeCall.

[9] More info: https://www.romecall.org/.

[10] F. Vannini, https://tinyurl.com/FV10perle.


Written by Luciano Ambrosini
PhD | Architect | Computational + Environmental Designer