AI Ethics: How Can Information Ethics Provide a Framework to Avoid Usual Conceptual Pitfalls? An Overview. (2020). AI & Society: Journal of Knowledge, Culture and Communication.

This is a post-print of an article published in AI & Society: Journal of Knowledge, Culture and Communication. The final authenticated version is available online at: https://doi.org/10.1007/s00146-020-01077-w

Frédérick BRUNEAULT
frederick.bruneault@claurendeau.qc.ca
Adjunct Professor, École des médias, Université du Québec à Montréal
Philosophy Professor, Collège André-Laurendeau, Montréal, Qc, CANADA

Andréane SABOURIN LAFLAMME
andreane.sabourin-laflamme@claurendeau.qc.ca
Philosophy Professor, Collège André-Laurendeau, Montréal, Qc, CANADA
PhD student, Law, Université du Québec à Montréal

Abstract

Artificial intelligence (AI) plays an important role in current discussions on information and communication technologies (ICT) and new modes of algorithmic governance. It is an unavoidable dimension of what social mediations and modes of reproduction of our information societies will be in the future. While several works in artificial intelligence ethics (AIE) address ethical issues specific to certain areas of expertise, these ethical reflections often remain confined to narrow areas of application, without considering the global ethical issues in which they are embedded. We therefore propose to clarify the main approaches to AIE, their philosophical assumptions and the specific characteristics of each of them, in order to identify the most promising approach for developing an ethical reflection on the deployment of AI in our societies, namely the one based on information ethics as proposed by Luciano Floridi. We will identify the most important features of that approach in order to highlight areas that need further investigation.

Keywords: ethics; artificial intelligence; transhumanism; liberalism; information

Artificial intelligence (AI) plays an important part in current discussions on information and communication technologies (ICT) and new modes of algorithmic governance. Being at the forefront of technological innovation, AI, particularly machine learning and deep learning, is regularly presented as the essential dimension of what social mediations and modes of reproduction of our information societies will be in the future. In the face of these changes, which many call the digital revolution, it seems essential to ask ourselves what the consequences of this revolution will be and how we can make responsible use of AI. Initiatives such as the Montreal Declaration for the Responsible Development of AI seek to respond to the fact that many stakeholders from the technical, marketing and development sectors of AI are addressing ethical issues specific to their fields of expertise, issues that have become unavoidable in all areas of application of these technologies. The problem is that these ethical considerations often remain confined to limited areas of application, without considering the global ethical issues in which they are embedded. Faced with the polysemy of the term 'ethics', many studies in AI ethics (AIE) seek to address different levels of ethical reflection - professional ethics, applied ethics, normative ethics and metaethics - very often without distinguishing them. Moreover, when a normative ethical approach is not simply presupposed, without further discussion, there is often an amalgam of normative ethical positions that may be incompatible with each other.
Although laudable in the face of the urgency to act imposed by the accelerated pace of development of AI systems, and preferable to the absence of ethical considerations that still too often prevails in technical and economic circles, such approaches usually remain on the periphery of ethical discussions, without going back to the theoretical roots behind such positions, thereby encouraging a certain confusion. Moreover, while the technical dimensions of the development of AI must of course be at the heart of a satisfactory ethical reflection on these technologies, these technical aspects alone can neither constitute a sufficient basis for the elaboration of an AIE, nor replace a philosophical approach that would make it possible to place such an AIE in the debates that have marked the history of philosophy.

We will first (1) examine the argumentative resources mobilized in the AIE debate in order to offer an argumentative spectrum that makes it possible to organize the main positions defended, according to two typical positions at both ends of the spectrum, and by analyzing the philosophical presuppositions, characteristics and pitfalls specific to each of these positions. More specifically, we will (1.1) discuss the inflationary perspective, leading to what we will call the substantialist position, and then (1.2) the deflationary perspective, leading to what we will call the instrumentalist position. After (1.3) discussing the limits of this opposition and the ethical theoretical presuppositions associated with the argumentative spectrum, we will (2) examine the informationalist position defended by Luciano Floridi, which seeks to develop an ethical approach complementary to classical normative ethical theories. We will then be able to highlight the advantages of this third path and indicate the possibilities that emerge from such a perspective with a view to developing further thinking on AIE.

1. The AIE debate

We will use the distinction proposed by Jocelyn Maclure (2019) between inflationary and deflationary outlooks in order to define the argumentative spectrum for locating the different positions defended in the AIE debate. The inflationary perspective emphasizes the long-term consequences of the development of AI by reflecting on issues and risks associated with the emergence of an artificial general intelligence (AGI). In contrast, the deflationary perspective seeks instead to think about the specific ethical issues that are currently associated with the development of real AI systems and their implementation in our societies. Far from being fixed categories that allow us to classify the different positions in the AIE debate once and for all, we rather believe that they are relational concepts, in the sense that, along the entire AIE argumentative spectrum, a particular position will be inflationary or deflationary compared to another position from which we evaluate it. For this reason, we believe it is necessary to go beyond this distinction, useful though it is, to identify the two ultimate positions at each end of the spectrum, thus defining the universe of possible positions in AIE (see Figure 1). We will therefore argue that at the far end of inflationism is the substantialist position, which makes technological development in general, and AI development in particular, a substantial reality independent of human decisions and interactions - in short, the thesis of technological determinism.
At the other far end of the spectrum, deflationism leads to the instrumentalist position, which stipulates that the development of AI, like that of technology in general, is a purely instrumental enterprise, instituted and controlled by humans, who can therefore influence it as they see fit - in short, the thesis of technological neutrality. While most of the positions defended in AIE fall somewhere in between these two extremes, it is nevertheless essential to understand the substantialist and instrumentalist positions in order to identify their influence on the more nuanced positions.

It should be noted that in this first part of our paper we will not use the usual distinctions in ethical theory between the deontological approach, the consequentialist approach and the approach based on virtue ethics. We believe that these distinctions are inappropriate for capturing the argumentative differences in AIE, since it is not only possible, but in fact quite common, to find positions inspired by different theories at similar places on the argumentative spectrum. To put it schematically, each position defended in the debate on AIE (P) consists of a component coming from ethical theories (E) and a component associated with a certain interpretation of the philosophy of technology (T), so that P = E + T. We believe that the decisive factor determining where a specific view on AIE sits on the argumentative spectrum is the T component, much more than the E component, so that a totally different position in AIE (P') implies a different interpretation of the philosophy of technology, but could very well be grounded in the same ethical approach: P' = E + T'. We will come back to classical normative ethical theories in 1.3.

Figure 1

1.1 Inflationism and the substantialist position

The inflationary perspective in AIE leads to what we call the substantialist position. This perspective has become increasingly popular over the last two decades and argues that the development of AI heralds the overcoming of the biological conditions of human existence. This advent of the next stage of human evolution is underway and societies must adapt to it. We will (1.1.1) present the ethical positions on AI related to the inflationary perspective, (1.1.2) the philosophical presuppositions of this perspective, and (1.1.3) the main characteristics of the substantialist position in AIE.

1.1.1 Inflationary ethical positions on AI

The inflationary perspective in AIE argues that the emergence of AI is the necessary outcome of exponential technological development and is nothing less than the most significant event in the evolution of the human species. It must be said that the number of areas where AI can compete with human intelligence is increasing at a rapid pace and that advances in deep learning lead many to believe that a machine may be able to perform tasks that have hitherto been too intuitive to be artificially generated - as evidenced by the remarkable advances in translation and image recognition, for example. Proponents of the inflationary perspective share the belief that a Human Level Machine Intelligence (HLMI) will soon emerge, i.e. an AI that could "carry out most human professions at least as well as a typical human" (Bostrom, 2014: 23).
Given the undeniable advantages of the machine over the biological brain (including memory space and speed of information processing), the education of the first AGI will proceed at a rate that is out of all proportion to the education of a biological brain. Although many researchers are cautious about these forecasts, probably because many overly enthusiastic predictions were made before the AI winter of the 1980s by their predecessors (advocates of the symbolic AI program often referred to as good old-fashioned artificial intelligence - GOFAI), proponents of the inflationary perspective in AIE still argue that the transition from weak AI to strong AI or AGI is nothing other than a necessity. The question is therefore not whether, but rather by which technological means, how fast, in what form and when a first AGI will emerge, whose development will be autonomous and thus independent of human intervention. Whether it comes as part of the singularity (Kurzweil, 2007) or as an explosion of intelligence (Bostrom, 2014), this event will constitute a revolution as important as sedentarization or the industrial revolution, and will catapult humanity into a phase of its evolution in which humans will transcend the limits of biology to embrace a reality in which the frontier between the living and the non-living, as well as between the natural and the artificial, will be erased forever (Tegmark, 2017). Given the inevitability and radical nature of this profound mutation, some consider the possibility that this "superintelligence" may pose an existential risk to the human species (Russell, 2019; Bostrom, 2014). These concerns have been addressed in a famous open letter signed by some of the most influential players in the AI field, who consider the risk of a global catastrophe associated with AI to be significant (https://futureoflife.org/ai-openletter/). Within the inflationary perspective in AIE, ethical reflections are, in the end, almost exclusively oriented towards the conditions related to the emergence of an AGI and the risks associated with it.

1.1.2 Inflationary philosophical assumptions

Although there is no consensus on the outcome of this revolution, and both dystopian narratives and utopian fables are written in this regard, it is clear that the positions that can be associated with the inflationary perspective in AIE are based on philosophical assumptions shared by transhumanism, an ideology that "recognizes and anticipates radical alterations in the conditions of our existence resulting from various sciences and technologies" (More, 1990: 6). There are debates within the inflationary perspective, particularly regarding the moral value of the paradigm shift that the emergence of an AGI will bring about. Indeed, while some authors who defend an inflationary posture call for the development of an AGI, others fear it. Often understood as a humanism that now has the means to achieve its ambitions, transhumanism conceives of technology as a tool that will enable humans to considerably increase their physical and cognitive capacities, freeing them from the biological constraints that limit them in their current form. Advances in robotics and nanotechnology give us hope that we can replace certain parts of our bodies with much more efficient artificial organs, since our biological body is highly imperfect - described as a "bloody mess of organic matter" by none other than Marvin Minsky, as reported by Turkle (2005).
Recalling a virulent form of Cartesian idealism, transhumanism ultimately aspires to a complete liberation from the body, so much so that many seriously consider the possibility of uploading the content of our minds onto hardware other than the obsolete biological medium we call our brain. Humanity as we know it today would only be the first step in an evolution that we have so far largely undergone, but that we would soon be able to control. In this perspective, AI is the means par excellence to achieve this goal, and the emergence of a strong AI could very well coincide with our transition to humanity 2.0 (Kurzweil, 2007) or to life 3.0 (Tegmark, 2017).

1.1.3 Characteristics of the substantialist position

The inflationary perspective in AIE culminates in the substantialist position, which is based on a deterministic view of technology according to which technological development is necessary, inevitable and irreversible. The technological development conjuncture implies that "with continued scientific and technological development efforts, all relevant technologies will eventually be developed" (Bostrom, 2014: 282). Faced with the inevitable, we should therefore resolve to try to anticipate the potential risks of an AGI for humanity and the means by which humans could be protected. It is nevertheless with a certain urgency that we must tackle the problem of control (Bostrom, 2014), because while strong AI will undoubtedly be able to solve, or at least help us solve, several problems that seem insoluble today (climate change, poverty, disease, etc.), the main challenge related to the emergence of a non-biological intelligence, radically superior to human intelligence and capable of improving itself, is to ensure that the actions of this strong AI are in line with the interests of humanity and correspond to human values. Insofar as it is accepted that these mutations could lead us both to our salvation and to our loss, the question that occupies the substantialists - a question whose sensational nature may explain the attention it has received in the media in recent years - is whether it is legitimate to give in to a certain panic in the face of the dangers linked to the emergence of a strong AI or whether, on the contrary, it would not be the means par excellence to free us from the constraints that have hitherto characterized the human condition. The paradigm in which the substantialist argument is embedded is consequently the one we propose to call "panic or deliverance" (PoD). Considering either the potential existential risks or the fabulous benefits of AGI for humanity, the substantialist argument in AIE evaluates the deployment of AI from this PoD perspective.

1.2 Deflationism and the instrumentalist position

The deflationary perspective is characterized, at first glance, by an approach that is diametrically opposed to that of the inflationary perspective in the AIE debate. Indeed, deflationists argue that the real ethical concerns related to AI lie only in the use that is made of it, since the GOFAI program has been abandoned and strong AI is therefore not a technical possibility (for a recent reflection on GOFAI, see Levesque, 2017). Technological innovations related to AI, as they are currently developed, are more akin to weak AI, which means that the fears raised and the consequences foreseen by the inflationary perspective in AIE are not justified.
The deflationary perspective, which tends towards the instrumentalist position, maintains that there are indeed ethical issues related to these technological innovations, but that they are of a completely different nature. We will now (1.2.1) discuss the ethical positions on AI defended from the deflationary perspective, (1.2.2) the philosophical presuppositions behind this perspective, and (1.2.3) the characteristics of the instrumentalist position in AIE.

1.2.1 Deflationary ethical positions on AI

Ethical positions on AI associated with the deflationary perspective are essentially based on the observation that AGI does not exist and will not exist in the foreseeable future. After the AI winter of the 1980s, much of the renewal and renaissance of AI programs has been possible essentially because the idea of reproducing the human mind in silico has simply been abandoned (Dignum, 2019; Domingos, 2015; Floridi, 1999). AI, as it is developed today, merely automates processes that once required the implementation of human cognitive processes but can now be accomplished by machines, given the exponential development of ICT. AIE must therefore, in this perspective, take note of this characteristic of AI development and assess the ethical issues related to the deployment of weak AI. This position characterizes, to varying degrees, the work of some digital and AIE theorists (e.g. Johnson and Verdicchio, 2017a, 2017b) and is clearly defended, for example, in the deflationary perspective in AIE of Jocelyn Maclure (Maclure, 2019; Maclure & Saint-Pierre, 2018). If the renewal of AI is based on the development of weak AI, it follows that speculation about the emergence of AGI is not warranted, since, as Maclure says, "the prospect of AI-enabled machines dominating the world and annihilating humanity seems too unlikely to be central to ethical and legal thinking" (Maclure & Saint-Pierre, 2018: 752, our translation). It is this same deflationary position in AIE that is implied in the efforts to provide guidelines for professional ethics to govern the development of AI (Boddington, 2017), such as the Montreal Declaration for the Responsible Development of Artificial Intelligence (declarationmontreal-iaresponsable.com) and the 84 initiatives to establish ethical guidelines for AI listed by Jobin, Ienca and Vayena (2019). This AIE perspective is also implied in an approach such as the MIT Moral Machine, which "gathered 40 million decisions, in 10 languages, from millions of people in 233 countries and territories" (Awad et al., 2018: 59) on how autonomous cars should behave in certain situations where a collision is unavoidable, taking into account that the people affected by the collision would have different socio-economic characteristics (a rethought version of the famous trolley problem). The positions defended in AIE by Yoshua Bengio and Sascha Luccioni (2019), Joanna J. Bryson (2019) and Martin Gibert (2020) also echo the deflationary perspective. The idea is therefore that regulatory conditions must be put in place in order to limit, or even prevent, the misuse of AI, which would solve the ethical problems caused by its deployment.

1.2.2 Deflationary philosophical assumptions

The philosophical assumptions behind the deflationary perspective in AIE are, at least to some extent, the main ideas of philosophical liberalism, i.e. deflationism in AIE is generally a rights-based approach associated with the defence of liberal democracy.
AI is thus conceived as an additional element to be considered in the application of the liberal framework. Such a perspective aims to bring AI into line with the principles of liberal societies (Hunyadi, 2018), with an emphasis on the development of strategies for the application of duties related to individual ethics, the idea being that AI, despite its innovative aspects and its exponential dimension, which raises certain challenges of transparency (Pasquale, 2015), represents only an instrumental development, like those that have always characterized the evolution of human beings. There would therefore be absolutely nothing different in the deployment of these technologies, other than a simple question of scale. The problem with the deflationary perspective in AIE, the closer we get to the end of the spectrum, is that it tends to rely more and more on an understanding of human beings as disembodied liberal individuals: users of means and instruments to achieve their ends, completely in control of their instruments and their own thoughts, who define themselves independently of the social and material (especially technological) conditions that characterize their de facto situation.

1.2.3 Characteristics of the instrumentalist position

Taken to the far end of the argumentative spectrum, the deflationary perspective leads to a position in AIE we call instrumentalist, i.e. a position that assigns value to the deployment of AI exclusively according to the use that is made of it. We therefore propose to call this position "maleficence or beneficence" (MoB), since it evaluates AI only in terms of individual users and their purposes. This perspective usually makes individuals bear the responsibility to protect themselves, for example by placing an inordinate amount of importance on the issue of consent. The instrumentalist position in AIE is based on the idea that technology is inherently neutral and that it is only indirectly morally charged, depending on what individual users decide to do with it, the focus of value resting exclusively on them. To solve the ethical problems raised by the development of AI technologies, it would therefore suffice to regulate their use in order to reduce the risk of maleficent use. For the instrumentalist position at the far end of deflationism, AIE is therefore equivalent to an applied ethics of artificial intelligence (AEAI). This AEAI should then highlight the sources of friction between the new technological capacities offered by AI and the social and legislative frameworks of liberal societies: for example, the issues related to the definition of sources of legal liability in the face of the deployment of autonomous artificial agents (think of autonomous cars; Awad et al., 2018), significant changes in labour relations and human resources management (Maclure & Saint-Pierre, 2018), problems generated in terms of privacy and confidentiality (De Filippi, 2016), potential abuses that these technologies allow (Alter, 2018; Mayer-Schönberger, 2014), the biases they may induce (Boutin, 2006), etc. The instrumentalist position in AIE seeks to propose adjustments between these technological innovations and the legislative and regulatory framework specific to liberal individual rights and fundamental freedoms, without, however, questioning or modifying this framework.
1.3 Ethical theories in the AIE debate

When it comes to the AIE debate, it seems at first glance that the two options are therefore the PoD perspective of the substantialist position and the MoB perspective of the instrumentalist position. However, we think that the difference between these two approaches lies not in their ethical perspectives on AI, but in their assessment of technological development. As mentioned earlier, the two positions are of course different, but since each position (P) consists of the conjunction of a first component related to the evaluation of AI according to normative ethical frameworks (E) and a second component related to an interpretation of the philosophy of technology (T), each position being thus the result of these two components (P = E + T), it appears that the difference comes from the T component of the equation. For these two extreme positions in AIE, as for most of the intermediate positions on the argumentative spectrum they delineate, the E component remains unchanged. On the basis of the now established expression "business as usual", we argue that we are dealing here with positions that, despite their differences, defend the same ethical perspective in AIE, namely the one we propose to call "ethics as usual" (EaU), i.e. the view that the development of AI does not fundamentally require a change in the frameworks of classical normative ethics. It is as if the options of classical normative ethics (deontology, consequentialism and virtue ethics) could simply be applied to the issues raised by the technological novelty of AI, even though inflationary and deflationary perspectives do not agree on the nature of these issues. However, the theoretical limitations of these approaches in ethics are well known and go well beyond the issues raised by the deployment of AI. It is not possible to revisit here these theoretical debates, which have been abundantly documented for decades, even centuries. It can nevertheless be said that we find it very problematic to simply take up one of these theoretical approaches and apply it directly to the ethical issues related to the deployment of AI, and even more so to compensate for the limitations of one approach by borrowing, eclectically and indiscriminately, from the conceptual framework of other theories. Rather, there is a need to develop a coherent approach, based on clear principles, to properly think about the role of each of the classical normative ethical frameworks in the development of a normative theory that can take into account current issues related to the deployment of AI and the ubiquity of ICT (on this issue, see van den Hoven, 2010).

We also believe that the opposition between substantialist and instrumentalist positions at the extremes of inflationism and deflationism is more of a system of thought than a genuine opposition, since the extreme positions ultimately reinforce each other. Instrumentalists justify their position by blaming substantialists for being distracted by uncertain speculation on unlikely scenarios, since the latter rely on an understanding of AI (especially AGI and GOFAI) that is not adequate given the current development of these technological innovations (which is quite reasonable).
For their part, the substantialists justify their approach by reasonably criticizing instrumentalists for underestimating the disruptive aspect of the deployment of AI, and thus underestimating the theoretical work necessary to think through a satisfactory approach to the ethical issues related to AI. By limiting AIE to an AEAI, instrumentalists neglect the fundamental philosophical questions that these technological innovations can generate in terms of the understanding of the self, the world and the relationships with others for the individuals who use them. For this reason, ethical positions surrounding the deployment of AI often appear to be at a dead end: on one side, a position with significant limitations due to an understanding of AI that does not correspond to the reality of the development of AI systems; on the other, a position that appears constrained, given the philosophical assumptions on which it is based, to deal only with AI application issues which, while they may be important and relevant to the development of an AIE, do not exhaust the range of fundamental issues associated with AI deployment. Therefore, a middle-ground position in the assessment of the fundamental ethical issues of AI that avoids this sterile dichotomy should be favoured. It must be noted that a great number of recent positions in AIE rely on a perspective that avoids these antipodes, and there are thus several examples of such middle-ground positions. However, most of these middle-ground positions in the AIE argumentative spectrum are still based on an EaU approach, which also needs to be questioned given the well-known problems of classical normative ethical theories, both theoretically and in relation to technology ethics (see 2.2) - a diagnosis on ethical theory that was already made by Hans Jonas (1979), namely that ethics must be reconsidered in the face of technological development (Bruneault, 2012), given the transformation of human action and the unprecedented magnitude of human actions generated by technological development, which is confirmed by the exponential development of the digital world.

Another set of criticisms of AIE approaches must also be addressed. Indeed, many defend a critical political perspective on the social deployment of AI that emphasizes the power relationships involved in digital development and algorithmic governance - issues that too often remain peripheral in EaU approaches. This critical political perspective on AI (which would deserve a full paper of its own and cannot be discussed in detail here) is an essential and too often neglected component of AIE. It is therefore necessary to think of an approach to AI that plays on the field of ethical debate without being limited to traditional normative theoretical approaches, while also integrating the questions associated with such a critical political perspective, in order to avoid reproducing a tendency of EaU approaches in AIE: that of favouring the elaboration of declarations or action guides in the form of lists of general, non-hierarchically organized principles that are not easily transferable as such to practices (Mittelstadt, 2019).
A satisfactory position in AIE must therefore meet three conditions: 1- avoid the substantialist and instrumentalist positions by developing an AIE perspective that is reasonably deflationary, while making it possible to 2- situate the ethical issues related to the deployment of AI in the socio-political framework associated with the ubiquity of ICT in our societies, and 3- rethink the theoretical frameworks of classical normative ethics in order to update them and develop a coherent explanation of the theoretical foundations of the position defended in AIE. While several recent positions in AIE fulfil the first condition, and a great number of them the second, we argue that few propositions seek to respond to the third. We accordingly think that the informationalist position in AIE defended by Luciano Floridi is particularly interesting, since it meets all three requirements. We will now explain why we think it allows us to develop an analytical framework that avoids some usual conceptual pitfalls in AIE.

2. Informationalist position in AIE

Faced with the inadequacies of both the PoD and MoB perspectives in AIE, as well as the EaU approaches to AIE, and taking into account the critical political perspective, the informationalist position in AIE seeks to develop an ethical framework that can adequately address the fundamental philosophical questions raised by the deployment of AI, while avoiding presupposing a development of AI that does not correspond to the technical reality. We will (2.1) present the ethical position on AI defended by the informationalist position, then (2.2) examine its philosophical presuppositions, and (2.3) underline the applications we can expect from it.

2.1 Informationalist position on AI

The informationalist position on AI, like other deflationary perspectives, states that current AI systems are weak AI systems and that the prospects of AGI, like the one presupposed by the substantialist position, do not correspond to current technological conditions or to those expected in the foreseeable future (Dignum, 2019; Domingos, 2015; Floridi, 1999). However, unlike the instrumentalist position and other deflationary perspectives, the informationalist position argues that the social deployment of AI involves much more than simply accommodating individual rights to these technological innovations. Echoing several current analyses that emphasize the changes that the social deployment of AI implies in our relationships to the world, to ourselves and to others (Broussard, 2018; Cheney-Lippold, 2017; Cohen, 2012; Dignum, 2019; Floridi, 2014; Freitag, 2018; Hansen, 2006; Rouvroy & Stiegler, 2015), the informationalist position argues that it is necessary to address these fundamental philosophical issues in order to develop a satisfactory approach to AIE. Furthermore, the informationalist position argues, unlike most of the middle-ground positions in the argumentative spectrum in AIE, that a satisfactory position in AIE cannot simply be an EaU. It is therefore necessary to develop a new ethical approach which, on the one hand, allows us to think through the ethical issues raised by AI in all their complexity and which, on the other hand, makes room for the modifications that the theoretical frameworks of classical normative ethics will have to undergo in order to take into account the completely new aspects of the current situation of ubiquity of ICT and AI. The informationalist position is based on information ethics, as developed by Luciano Floridi (2013).
Grounding his ethics in his philosophy of information (Floridi, 2011), he argues that the current situation, linked to the development of ICT and the associated digital revolution, requires us to think anew the very idea of information, in order to make it the central pivot of philosophy, in particular of a satisfactory ethical approach to current issues in our societies (Floridi, 2013: 1). Accordingly, AI should be ethically evaluated in terms of the capacities it can provide and the problems it could generate for "informational flourishing", especially that of individuals, groups of individuals and societies living in the material and informational conditions resulting from the deployment of AI (Floridi, 2013: 169).

2.2 Informationalist philosophical assumptions

The philosophical presuppositions behind the informationalist position in AIE are clearly set out by Floridi himself (2014). The first point he makes about the information societies in which we live, shaped by the deployment of AI, is what he calls the ubiquity of ICT. The rapid and exponential development of ICT requires that we completely rethink the philosophical assumptions on which we understand our societies and ourselves (Floridi, 2013: 8). This development of ICT highlights the centrality of the concept of information for individuals and societies. Floridi argues that this is a fourth revolution (Floridi, 2013: 13): after the Copernican revolution, which challenged the idea that the Earth is at the center of the universe, after the Darwinian revolution, which challenged our special status among living beings, and after the Freudian revolution, which challenged the Cartesian idea of an individual totally in control of herself and her thoughts, we are now facing the information revolution, which not only shows that the concept of information is at the very heart of our human identity, but also that today we are no longer the only entities capable of producing meaning and information, since we have now ourselves built machines that are capable of creating and manipulating this information, namely ICT and AI.

According to Floridi, with the digital revolution we have thus entered what he calls hyperhistory (Floridi, 2013: 3). Humanity moved from prehistory to history when the first ICT appeared, the leap from prehistory to history being the shift from a society that continuously reproduces the same life cycles to a society that is able to preserve, notably through writing, certain forms of collective memory that allow the development of the complex and elaborate cultures we know. Today, ICT have taken on a new importance in the development of our societies. Throughout history, ICT were used to compile and record events that otherwise followed their own path, be they social, economic or political. Today, ICT have become the very engine of social, economic and political systems, so that they occupy the central role in the development and reproduction of information societies, at least at the beginning of the twenty-first century, in the post-industrial societies of the G7 countries.
Just as human beings are part of the living world, the biosphere, Floridi argues that they also evolve in an informational universe that can be called the infosphere (Floridi, 2013: 8), a universe that was implicit in earlier phases of the development of human societies and which is now becoming an essential component of them, since "only very recently has human progress and welfare begun to depend mostly on the successful and efficient management of the life-cycle of information" (Floridi, 2013: 3). Human beings evolve in this universe of meaning and significance, which becomes quite apparent once ICT have unleashed information management capabilities beyond the limitations that existed previously, that is, once we enter hyperhistory. It is therefore necessary to think anew the ethical issues raised by the deployment of AI, by understanding these questions in relation to this universe of meaning.

Floridi also proposes a neologism, the expression "onlife", to characterize our current relationship to the digital world (Floridi, 2015). The difference that was still made, until a few years ago, between "offline" and "online" life is becoming increasingly blurred, if not totally erased, so that our lives are today constantly connected to the digital world because of ICT and their ubiquity (on issues related to "ambient intelligence", see Costa, 2016). It is therefore necessary to reflect on the philosophical and ethical issues involved in the deployment of AI, acknowledging this "onlife" and the blurring that this new dimension of our existence implies for many spheres of our lives. Floridi (2015: 7) identifies four aspects of this blurring: 1- the blurring of the distinction between real and virtual worlds; 2- the blurring of the distinctions between nature, people and artificial creations; 3- the shift from information scarcity to what is now called "infoglut", the overabundance of information; and 4- the shift from a metaphysics of entities to a metaphysics of processes, since we increasingly resort not to material and physical entities to describe our world, but to the relationships between these entities. AIE must therefore necessarily be based on an information ethics that allows us to think about these changes.

Information ethics thus makes it possible to adopt a position in AIE that is generally deflationary (compared to the one defended by transhumanists, for example), while at the same time providing a conceptual framework that makes it possible to account for the disruptive aspect of these new technologies, which fulfils the first condition for a satisfactory position in AIE. This information ethics must therefore be "a non-standard (because patient-oriented), ontocentric and e-nvironmental macroethics" (Floridi, 2013: 97). First, information ethics is a non-standard (Floridi, 2013: 62), patient-oriented ethics. Unlike the theoretical frameworks of classical normative ethics, which assign moral value on the basis of 1- an evaluation of the moral agent (as in virtue ethics) or 2- an evaluation of the action taken (as in consequentialist and deontological ethics), information ethics rather seeks to evaluate actions or situations according to the impact they have on the entities that undergo these impacts, in order to morally evaluate the possibilities offered to these entities in terms of "informational flourishing" (Floridi, 2013: 74). Second, information ethics is ontocentric (Floridi, 2013: 65); it values informational entities, whatever they may be, according to the degree of informational organization they embody.
Since all existing entities are informational entities, however simple they may be, it follows that they should all be considered, at least minimally, in the ethical assessment of actions or situations. Unlike anthropocentric ethics, which values only human beings, and unlike biocentric ethics, which values only living beings, Floridi proposes to give a non-absolute value to any informational entity, and thus to any existing entity, to all beings. Information ethics must finally be an "e-nvironmentalism" (Floridi, 2013: 18; Floridi, 2014: 217) that must be distinguished from ethical environmentalism as it was developed in the twentieth century, since the "e-nvironmentalism" of information ethics must not oppose the artificial world created by human beings to an untouched nature that would be the environment of the human world. On the contrary, "e-nvironmentalism" acknowledges that the environment in which human beings act and human societies develop is increasingly made up of artificial objects created by human beings. The ethical dimension of human action must therefore be thought of not only in the natural environment (in the classical sense), but also while considering that this environment is itself, to a large extent, the result of human actions and decisions of previous generations. In doing so, current actions and decisions must also be evaluated in terms of the role they will play in building the environment from which future generations will have to act. This aspect of information ethics thus builds a bridge both between individuals of the same generation, considering the social dimension of human existence, and between generations. It also refers to a central component of Floridi's approach, namely his method of abstraction (MoA), which distinguishes different levels of abstraction (LoA) for analysing any entity (Floridi, 2013: 29). According to this MoA, it becomes possible to assess the ethical impact of the deployment of AI not only on individuals, but at different LoA - individual, interpersonal, relational, for groups, societies, generations, etc. (Floridi, 2013: 29). This social dimension, which echoes the second condition for a satisfactory position in AIE, will moreover be further developed in the political application of information ethics (see 2.3).

Taking up the concept of entropy, borrowed first from thermodynamics and then from cybernetics (Wiener, 1954), Floridi argues that we should understand his information ethics in terms of "informational flourishing" and "informational entropy", i.e. what respectively favours informational development and what leads to informational disorganization in the infosphere. Accordingly, he proposes four fundamental principles for information ethics (Floridi, 2013: 71), freely inspired by Isaac Asimov's famous laws of robotics, namely:

0- entropy ought not to be caused;
1- entropy ought to be prevented;
2- entropy ought to be removed;
3- the flourishing of informational entities (and of the whole infosphere) ought to be promoted.

These principles of information ethics should therefore apply to actions and situations for all informational entities, whatever they may be, in a non-absolute way, while acknowledging the possibility of granting a special, possibly absolute, status to informational and semantic entities, which create meaning and are capable of understanding the meaning of information, i.e. human beings. One can therefore consider evaluating AI according to its capacity (for each AI system separately) to foster "informational flourishing" or to create "informational entropy" in the infosphere, considering the impact of these AI systems on all informational entities, especially on human beings as informational and semantic entities. These peculiarities of information ethics allow us to think anew the theoretical frameworks of classical normative ethics, while considering the ubiquity of ICT and AI in our societies, which fulfils the third condition for a satisfactory position in AIE.
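To make the structure of such an evaluation more concrete, the following minimal sketch (ours, not Floridi's formalism) encodes the four principles as an ordering over candidate actions at a given LoA. Everything in it is an illustrative assumption: the data model, the numeric scores and, in particular, the reading of the numbered principles as a strict priority ordering, with principle 0 taking precedence over 1, 1 over 2, and 2 over 3.

    # Illustrative toy sketch only: a crude encoding of the four principles of
    # information ethics as an ordering over candidate actions. The data model,
    # names and scores are our own assumptions, not Floridi's formalism.
    from dataclasses import dataclass

    @dataclass
    class ActionAssessment:
        entropy_caused: float      # new informational entropy the action creates
        entropy_prevented: float   # entropy the action stops from occurring
        entropy_removed: float     # existing entropy the action repairs
        flourishing: float         # informational flourishing the action promotes

    def rank_key(a: ActionAssessment) -> tuple:
        # Principles 0-3 applied in order of priority (0 is the most stringent):
        # 0- entropy ought not to be caused;
        # 1- entropy ought to be prevented;
        # 2- entropy ought to be removed;
        # 3- the flourishing of informational entities ought to be promoted.
        return (-a.entropy_caused, a.entropy_prevented,
                a.entropy_removed, a.flourishing)

    # Example: comparing two hypothetical AI deployments at a given LoA.
    profiling = ActionAssessment(entropy_caused=0.7, entropy_prevented=0.1,
                                 entropy_removed=0.0, flourishing=0.4)
    archiving = ActionAssessment(entropy_caused=0.0, entropy_prevented=0.2,
                                 entropy_removed=0.3, flourishing=0.2)

    best = max([profiling, archiving], key=rank_key)
    print(best)  # archiving: it causes no entropy, so it wins under principle 0

Because Python compares tuples element by element, the sketch treats principle 0 as strictly dominant: an action that causes more entropy loses regardless of how much flourishing it promotes. This is only one possible reading of the priority among the principles, and the point is purely structural: the evaluation is patient-oriented, since it scores the effects undergone by informational entities rather than the agent or the action in itself.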
2.3 Informationalist position: applications

The informationalist position in AIE is furthermore complementary to an AEAI, since it does not deny that the latter, as developed from a general deflationary perspective, is an inescapable component of AIE. Nor does it aim to supplant the theoretical frameworks of classical normative ethics; rather, it updates these approaches in order to think them anew (Floridi, 2013: 77). The contribution of the informationalist position in AIE to applied ethics is significant (see, e.g., Floridi's paper on autonomous artificial agents in Anderson & Anderson, 2011: 184). For example, it is possible to highlight the value of the interpretation of ethical issues related to the protection of privacy put forward by information ethics (Floridi, 2013: 228). Contrary to conventional interpretations associating the unwanted collection of personal data with theft, information ethics proposes that, if human beings are informational and semantic entities, it follows that their personal data are not a property that individuals possess in the same way as material goods. Rather, their personal data are a description of what these individuals fundamentally are (Floridi, 2013: 242). Privacy protection is therefore an essential component of the evaluation of actions and situations for information ethics, since once we understand the primacy of the concept of information in personal identity, we realize that personal data are an intrinsic part of what individuals themselves are. A breach of privacy is thus more akin to kidnapping than to theft.

It should also be noted that the political interpretation of the informationalist position in AIE is linked to four ideas: a new definition of power, deterritorialization, the separation of identity and coherence, and the continual questioning of consent (Floridi, 2015: 51). Such political considerations, which cannot be developed further here, echo the technological capabilities approach (Brey, 2010; Costa, 2016; Oosterlaken & van den Hoven, 2012; Stahl, 2007; Zheng & Stahl, 2011), but also the critical political perspective, particularly through critical analyses of new algorithmic powers (Ananny & Crawford, 2016; Andrejevic, 2013; Bucher, 2018; De Grosbois, 2018; Dean, 2009; Freitag, 2003; Morozov, 2013; Nissenbaum, 2010; Rouvroy & Berns, 2013; Susskind, 2018; Taylor, 2014; Whittaker, 2019), approaches in the critical theory of technology (Feenberg, 1991, 2010) and in the critical theory of ICT (George & Kane, 2015; Mondoux, 2011; Mondoux & Ménard, 2018; Ouellet, 2016; Ouellet et al., 2015). The informationalist position on AI policy issues is complementary to the critical political perspective.
It is not intended to supplant or replace the latter; on the contrary, it seeks to offer a satisfactory interpretation of the issues at stake in AIE in a way that can be in tune with critical analyses of the balance of power, reification and social issues related to the deployment of AI. Information ethics thus fulfils the second condition for a satisfactory position in AIE.

We therefore argue that the informationalist position is a very promising perspective for developing a position in AIE that is neither catastrophist nor dependent on technological solutionism, that is not disproportionately deflationary, that takes into account the political issues emerging from the social deployment of AI, and that addresses the limits of the theoretical frameworks of classical normative ethics. Such an approach makes it possible to cover all aspects of AI, in professional ethics as well as in applied ethics, normative ethics and metaethics. It is thus a good starting point for the development of a comprehensive AIE that is appropriate to the social ubiquity of ICT. AIE is still at an early stage of its deployment; let us hope that this paper can make a significant contribution to its further development.

Bibliography

Alter, A. (2018). Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. New York: Penguin Books.
Ananny, M. & K. Crawford. (2016). Seeing Without Knowing: Limitations of the Transparency Ideal and its Application to Algorithmic Accountability. New Media & Society, 20(3), p. 973-989.
Anderson, M. & S. L. Anderson (eds.). (2011). Machine Ethics. Cambridge: Cambridge University Press.
Andrejevic, M. (2013). Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York: Routledge.
Awad, E. et al. (2018). The Moral Machine experiment. Nature, 563, p. 59-77.
Bengio, Y. & S. Luccioni. (2019). On the Morality of Artificial Intelligence. arXiv:1912.11945 [cs.CY], December 2019.
Boddington, P. (2017). Towards a Code of Ethics for Artificial Intelligence. New York: Springer.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Boutin, E. (2006). Biais cognitifs et recherche d'information sur internet. Quelles perspectives pour les indicateurs de pertinence des moteurs de recherche. VSST 2006. https://archivesic.ccsd.cnrs.fr/sic_00827309/document
Brey, P. (2010). Values in Technology and Disclosive Computer Ethics. In Luciano Floridi (ed.), The Cambridge Handbook of Information and Computer Ethics. Cambridge: Cambridge University Press, p. 41-58.
Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press.
Bruneault, F. (2012). Comment définir une éthique pour notre civilisation technologique? L'apport d'une lecture conjointe des pensées de Karl-Otto Apel et Hans Jonas. Laval théologique et philosophique, 68(2), p. 335-357.
Bryson, J. J. (2019). The Past Decade and Future of AI's Impact on Society. In Towards a New Enlightenment? BBVA. bbvaopenmind.com/wp-content/uploads/2019/02/BBVA-OpenMind-book-2019-Towards-a-New-Enlightenment-ATrascendent-Decade-3.pdf
Bucher, T. (2018). If… Then: Algorithmic Power and Politics. Oxford: Oxford University Press.
Cheney-Lippold, J. (2017). We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press.
Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. New Haven & London: Yale University Press.
Costa, L. (2016). Virtuality and Capabilities in a World of Ambient Intelligence: New Challenges to Privacy and Data Protection. New York: Springer.
Dean, J. (2009). Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics. Durham, NC: Duke University Press.
Déclaration de Montréal pour un développement responsable de l'intelligence artificielle. (2018). Université de Montréal.
De Filippi, P. (2016). Gouvernance algorithmique : vie privée et autonomie individuelle à l'ère des Big Data. In Primavera De Filippi & Danièle Bourcier (eds.), Open Data & Data Protection : Nouveaux défis pour la vie privée. Mare & Martin.
De Grosbois, P. (2018). Les Batailles d'Internet : Assauts et résistances à l'ère du capitalisme numérique. Montréal: Écosociété.
Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
Domingos, P. (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books.
Feenberg, A. (1991). Critical Theory of Technology. Oxford: Oxford University Press.
Feenberg, A. (2010). Between Reason and Experience: Essays in Technology and Modernity. Cambridge, MA: MIT Press.
Floridi, L. (1999). Philosophy and Computing: An Introduction. London: Routledge.
Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.
Floridi, L. (2013). The Ethics of Information. Oxford: Oxford University Press.
Floridi, L. (2014). The 4th Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.
Floridi, L. (ed.). (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. New York: Springer.
Freitag, M. (2003). La dissolution systémique du monde réel dans l'univers virtuel des nouvelles technologies de la communication informatique : une critique ontologique et anthropologique. In Mattelart, A. & Tremblay, G. (eds.), 2001 Bogues : Communication, démocratie et globalisation, tome 4. Québec: Presses de l'Université Laval, p. 279-296.
Freitag, M. (2018). La société informatique et le respect des formes. In Le Naufrage de l'université et autres essais d'épistémologie politique. Montréal: Alias.
George, É. & O. Kane. (2015). Les technologies numériques au prisme des approches critiques : éléments pour l'ébauche d'une rencontre. Canadian Journal of Communication, 40, p. 727-735.
Gibert, M. (2020). Faire la morale aux robots : Une introduction à l'éthique des algorithmes. Montréal: Atelier 10.
Hansen, M. B. N. (2006). Bodies in Code: Interfaces with Digital Media. New York: Routledge.
Hunyadi, M. (2018). Le Temps du posthumanisme : Un diagnostic d'époque. Paris: Les Belles Lettres.
Jobin, A. et al. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence, 1, p. 389-399.
Johnson, D. G. & M. Verdicchio. (2017a). Reframing AI Discourse. Minds and Machines, 27(4), p. 575-590.
Johnson, D. G. & M. Verdicchio. (2017b). AI Anxiety. Journal of the Association for Information Science and Technology, 68(9), p. 2267-2270.
Jonas, H. (1979). Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation. Frankfurt am Main: Suhrkamp Verlag.
Kurzweil, R. (2007). Humanité 2.0 : La bible du changement. M21 Éditions.
Levesque, H. J. (2017). Common Sense, the Turing Test, and the Quest for Real AI. Cambridge, MA: MIT Press.
Maclure, J. (2019). The New AI Spring: A Deflationary View. AI & Society: Journal of Knowledge, Culture and Communication. https://doi.org/10.1007/s00146-019-00912-z
Maclure, J. & M.-N. Saint-Pierre. (2018). Le nouvel âge de l'intelligence artificielle : une synthèse des enjeux éthiques. Les cahiers de propriété intellectuelle, 30(3), p. 741-765.
Mayer-Schönberger, V. (2014). La Révolution Big Data. Politique étrangère, 4, p. 69-81.
Mittelstadt, B. (2019). Principles Alone Cannot Guarantee Ethical AI. Nature Machine Intelligence, 1, p. 501-507.
Mondoux, A. (2011). Identité numérique et surveillance. Les Cahiers du numérique, 7(1), p. 49-59.
Mondoux, A. (2012). À propos du social dans les médias sociaux. Terminal, 111, p. 69-79.
Mondoux, A. & M. Ménard. (2018). Big Data et société : Industrialisation des médiations symboliques. Québec: Presses de l'Université du Québec.
More, M. (1990). Transhumanism: Towards a Futurist Philosophy. Extropy, 6.
Morozov, E. (2013). To Save Everything, Click Here. Philadelphia, PA: PublicAffairs Books.
Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.
Oosterlaken, I. & J. van den Hoven (eds.). (2012). The Capability Approach, Technology and Design. New York: Springer.
Ouellet, M. (2016). La Révolution culturelle du capital : le capitalisme cybernétique dans la société globale de l'information. Montréal: Écosociété.
Ouellet, M. et al. (2015). Big Data et quantification de soi : la gouvernementalité algorithmique dans le monde numériquement administré. Canadian Journal of Communication, 40, p. 597-613.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.
Rouvroy, A. (2018). Homo juridicus est-il soluble dans les données? In Cécile De Terwangne, Élise Degrave & Séverine Dusollier (eds.), Law, Norms and Freedoms in Cyberspace / Droit, normes et libertés dans le cybermonde. Bruxelles: Larcier.
Rouvroy, A. & T. Berns. (2013). Gouvernementalité algorithmique et perspectives d'émancipation. Réseaux, 177.
Rouvroy, A. & B. Stiegler. (2015). Le régime de vérité numérique. Socio, 4. http://journals.openedition.org/socio/1251; DOI: 10.4000/socio.1251 (accessed 18 August 2019).
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Russell, S., D. Dewey & M. Tegmark. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, Association for the Advancement of Artificial Intelligence.
Stahl, B. C. (2007). Ontology, Life-World, and Responsibility in IS. In Raj Sharman, Rajiv Kishore & Ram Ramesh (eds.), Ontologies: A Handbook of Principles, Concepts and Applications in Information Systems. Springer.
Susskind, J. (2018). Future Politics: Living Together in a World Transformed by Tech. Oxford: Oxford University Press.
Taylor, A. (2014). Démocratie.com : pouvoir, culture et résistance à l'ère des géants de la Silicon Valley. Montréal: Lux.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Vintage Books.
Turkle, S. (2005). The Second Self: Computers and the Human Spirit. Cambridge, MA: MIT Press.
van den Hoven, J. (2010). The Use of Normative Theories in Computer Ethics. In Luciano Floridi (ed.), The Cambridge Handbook of Information and Computer Ethics. Cambridge: Cambridge University Press.
Whittaker, M. (2019). Artificial Intelligence: Societal and Ethical Implications. Testimony before the United States House of Representatives Committee on Science, Space, and Technology, June 26. https://science.house.gov/imo/media/doc/Whittaker%20Testimony.pdf
Wiener, N. (1954). The Human Use of Human Beings: Cybernetics and Society. New York: Doubleday.
Zheng, Y. & B. C. Stahl. (2011). Technology, Capabilities and Critical Perspectives: What Can Critical Theory Contribute to Sen's Capability Approach? Ethics and Information Technology, 13(2), p. 69-80.