
Dialogue with Charles Corval

Charles Corval is a doctoral student in political science at Sciences Po Paris. His research focuses on the ethical and political issues raised by the development of smart and connected objects.

What memories do you have of the Derrida and Technology colloquium, organized by Joseph Cohen and Raphaël Zagury-Orly, which was your first intervention of this type? How did you come to study the question of robot law?

Charles Corval: This symposium had a great impact on me, because it admirably testified to the vitality of Derrida’s thought. The contributions each showed how deconstruction continues to mark the philosophical field, both as a resource for thinking and as an object of research. Remarkable interventions by Raphaël Zagury-Orly and Joseph Cohen offered a clear vision of how technology sits at the very heart of current philosophical work. The various research presented during the symposium also made it possible to grasp the extent to which certain trends in contemporary technologies meet religious themes and theories of art – from self-writing to cinema. I am thinking, for example, of the tele-presence that we all experienced collectively during the health crisis. There is also the massification of data capture, the result of which is to produce new forms of actors (called “artificial intelligence”) thanks to the contributions and traces of millions of people. These forms of actors provide experiences that can be compared to Derridean hauntology, as Michaela Fiserova did in her contribution. A text such as Échographies de la télévision, in which Derrida discusses audiovisual issues with Bernard Stiegler, already gives us ways to think about these objects, whereas at the time they were still only very distant prototypes. Gérard Bensussan’s presentation on improvisation finally recalled the role of chance and of singular variation, in its both aporetic and creative dimension, in the midst of technical applications.

Regarding my approach to technology and Derrida’s thought, it has its origins in the progression of my academic work. My first major academic work, still intellectually immature, was on the myth of art in Malraux. The question that resulted from it concerned value judgments about objects. The fact that art benefits, in our social practices, from a form of sacredness that most other products do not then seemed to me a crucial question. How has our relationship to technology gradually distinguished itself from, and almost opposed itself to, the one we have with works of art? It is to explore this question that I proposed, in my master’s theses at the Sorbonne and at Sciences Po respectively, research on the philosophical reception of automatons in the 18th century and on the claim of rights for robots. The first work was historical and was meant to allow me to reflect on the perception that the French philosophers of the Enlightenment had of this kind of object. The lesson of this work is that automatons and androids were then still closely perceived as sculpted and painted works. They did not belong to an extra-artistic domain but, on the contrary, were inscribed as real works where science, art, and philosophy meet. Automata and androids are indeed terms already defined and discussed in the Encyclopédie of Diderot and d’Alembert. They directly raise questions about the human condition. These objects then had a great plurality of uses in the production of philosophical and political discourse. Scientifically, technically, and existentially, automatons gave rise to a conceptual renewal: they opened onto the ideal of a perfect and complete visualization of the human body; they questioned what human freedom means; they also offered resources for exercising taste and intelligence, for example by trying to understand how they work.
It is this use of the technical object as a means of thinking further in philosophy that then oriented me towards contemporary issues. My work continued with the analysis of the claim for ethical or legal recognition of robots. I found there a technical object which, over two centuries, has certainly changed form several times, but which still often retains the ideal of imitating the human body or small animals. The robot dog Spot from Boston Dynamics, Takanori Shibata’s Paro, which imitates a seal, or Pepper from Aldebaran take forms analogous to Vaucanson’s Flute Player or Duck. It is this contemporary work that the conference on Derrida and technology allowed me to present. For the claim of rights for robots, while largely based on arguments of a legal nature or coming from North American philosophy, has also mobilized French thinkers of the end of the 20th century, in particular Derrida, Levinas, and Latour. It is this mobilization that constitutes the final question of this study: why have contemporary societies come to mediatize and seriously discuss the possibility of recognizing rights for robots? My answer, briefly stated, is that the critique of the old classical humanism has made it more difficult to immediately reject any moral obligation to the non-human, whether natural or technical. This critique leaves us with a moral uncertainty about our duties to things beyond the circle of humanity. This, it seems to me, is what can be observed quite explicitly in the discourse of current philosophers of technology such as David Gunkel, Mark Coeckelbergh, or Peter-Paul Verbeek. It should however be specified that it is not a question here of saying that this uncertainty has to be reabsorbed: ethics is also discovered insofar as it cannot be based on abstract and easy rules, or on a simple and uncritical metaphysics.
However, this openness to uncertainty leaves room for attempts to impose new systems of rights whose consequences can be dangerous and which, under the guise of protecting the non-human, would amplify the terrible dominations already suffered by a large number of individuals all over the world. Thus we have seen robotics lawyers and thinkers gain a large audience by calling for rights for robots without really mentioning the effects on property rights, on the way of life of those who have to live or work with these robots, or even on the economic interests that would be nurtured in this way. I am thinking, for example, of private companies directly involved in the robotization of our societies, but also of insurance companies and companies in charge of IT security. Fortunately, reactions from experts in the field quickly dismissed the potential realization of robot rights that would excessively relieve private actors of their responsibilities.

Could you tell us about your thesis subject: “Governing connected objects: the establishment of political norms of justice in the deployment of new relationships between humans and objects”?

Charles Corval: My thesis subject directly pursues my questioning of the ethics of robotics and its uncertainties. My research hypothesis is that it is possible to find, in the experiences of connected and/or intelligent objects, significant ethical and political questions. By “significant” we must understand here not only worthy of reflection, but above all capable of directing understanding and perception. The choice of this hypothesis is justified insofar as there is today an extremely rapid accumulation of new technological experiences, without these having been the subject of any clarification as to what they induce or what possibilities they open up. Indeed, it is still common to find in the social sphere discourses that disconnect political stakes from technological decisions, as if the latter could be fully neutral. There are now also many critical and general interpretations of the digital phenomenon. These still pay little attention to digital experience itself and tend to make it an alienation or, at best, a source of resistance for the avant-gardes aware of the dominations in progress. My perspective seeks a specific place between these various possibilities: starting from the lived experience of a technology to produce a discourse on what it may morally imply.

I therefore argue that three experiences have imposed themselves massively in recent years and give us reasons to rethink our relationship to technological objects. The first concerns anthropomorphism: artificial intelligence, by spreading through the digital world, constantly puts us in front of objects that use fragments of cognitive faculties to assist us. I am not talking here just about robots, but about all those objects that become capable of notifying us of things and therefore take an active role in the lives of their users. In my opinion, this experience of anthropomorphism, and this increasingly strong mixing of machines and human ways of interacting, invites us to think of the world of objects from the angle of their passivity and their vulnerability. We find ourselves faced with phenomena of artificial empathy and attachment to our objects which are linked to their “intelligence”. Here, intelligence is nevertheless reduced to the sole ability to simulate humanity – and we must beware of this marketing which reduces intelligence to simple calculations. Still, this type of experience puts us in a situation where technology appears as a quasi-subject. From there arises, even outside the ecological framework, the question of what we owe to these objects and of the good behavior that we must adopt towards them. The second experience to have spread widely is that of surveillance. There are more and more situations where individuals can monitor their surroundings. Air purifiers monitor and perceive household pollution. Smart bulbs can tell us when they turn on and off. Heaters can tell us about our electricity consumption. Connected refrigerators promise to diagnose the state of our food. There is no need to multiply the examples to see that, while the surveillance of humans by other humans has largely imposed itself as an issue in the public sphere, the surveillance of objects and environments by humans is also in full development.
The dangers to privacy are obvious. What interests me is the position that the individual takes vis-à-vis this surveillance: it can be used to take care of one’s environment as well as to feed paranoid or narcissistic behavior.

The third experience concerns the networking of our objects. We now experience how private property in our intimate spaces connects to a mass of other users, data processing centers, and hubs of human labor. This connection carries out a certain number of silent choices in the updates that our products receive and in the way they will change over time. This situation confronts us with our daily dependence: our own goods depend on sometimes distant human organizations. Just think of the individuals whose Huawei phones stopped working because of the geopolitical crisis between the United States and China. There are also, of course, telephones, cars, or computers that undergo updates which sometimes change their uses by directly removing certain functions. In short, the connection of objects practically breaks the idea that an individual can be the sole master of his objects. It makes us experience the dependence and complexity of modern technological networks. This dependence does not exist only for connected objects, but they make it extremely visible. Contrary to the old industrial design, which tended rather to introduce the dependence between user and producer through advertising and through choices concerning the maintenance of the product, here the dependence is felt continuously and at a distance.

These experiences, in my opinion, sketch a particular historical possibility: the fulfillment of certain duties of society and of individuals towards their products. The word duty must here be understood as a relationship that we have to an exteriority. We have duties towards each other: for example, to abstain from certain behaviors and to prevent certain harms that we may cause, even involuntarily, to others. If this notion of duty tends to apply more and more broadly – to future generations, to memory, and to nature – it has not yet penetrated the field of products. However, I believe that the fact that products can look more and more like humans, that they can also be monitored, and that they give the image of a general dependence between humans and non-humans, invites us to conceive of this extension of duty. It easily finds an ecological meaning in responsible consumption, but I seek to show that it is possible to defend it through lived experience itself. Becoming acquainted with technologies that revolve around the extension of humanity gives an experience of a specific otherness. I believe that it is possible to base an ethics on this experience – of the otherness of the simulated human, of the gaze mediated by technology, and of the hybridization between human work and the automatism of machines. I consider that such an ethics is realized in the practice of conservation and in its corollary judgment: the technical object must be made to last. Conserving an object means keeping it with you, in a relationship that allows it to continue to be part of a life. Conservation as a duty is opposed to disregarding the conditions of existence of a technological object. Connected and intelligent objects can directly serve and reveal my inclusion in a complex world that is beyond me, a world of uncertainty concerning the exceptionality of my humanity, my perceptions, and my autonomy.
It is from this experience that it is possible to derive both a new version of an ideal of justice and of an ideal of the good life. Concretely, it is a question of starting from objects to think, on the one hand, about constraints, that is to say the rules which should be imposed on everyone in the use of technical objects; and, on the other hand, about obligations, that is to say moral judgments concerning good and bad behavior with these objects. In terms of constraints, if we accept that conservation is an objective housed in the very coherence of technological experience, it is a matter of regulating companies to encourage them to make objects that are more durable, more endearing, and simpler to preserve and maintain. On the subject of individual responsibility, there must also be systems in place to encourage conservation. I am thinking, for example, of a penalty system, in the form of a tax on the renewal of certain objects after too short a time. Currently, according to ARCEP and the digital barometer, smartphones tend to be kept for less than three years in France, and about a quarter of the renewals of these objects are motivated by the pleasure of novelty. It is this kind of phenomenon that I seek to theorize from the angle of justice. At the level of obligations, the most important part of my work comes down to producing a set of arguments to have it recognized that a life dedicated to making technical objects last in their materiality and to maintaining them – whether to preserve or to augment them – is a good one. At this level, I seek to respond to the various classical moral systems still discussed today in political theory. I contrast the short-term pleasure of consumption with a long-term pleasure linked to habits and to the satisfaction of maintaining one’s environment.
To the question of duty, which has traditionally been thought of as a human-to-human relationship, I oppose the idea that duty does not have to be based on anthropocentrism, but can well extend to a non-human ideality, whether through the anthropomorphic register of “as if” it were human, or through the recognition that objects themselves constitute human experience. At the level of virtues, I look for models of maintenance of and attachment to objects that can guide our actions while avoiding extreme forms of fetishism.

What influence did the thought of Frédéric Gros, a specialist in Michel Foucault, have on you?

Charles Corval: The influence of Frédéric Gros can be found in my work through the question of the construction of subjectivity. In line with Foucault’s late work, I try to understand, relying also on research on the Quantified Self and digital identity, how technical objects participate in the construction of subjectivity. From this perspective, technical objects are an obvious issue of power: they educate and accustom us to certain relationships; they broadcast gestures and images in daily life. I try to bring Foucault’s works – in particular the questions linked to the panopticon, to the concept of the “apparatus” (dispositif), and to the government of things – into discussion with the contemporary questions posed by the nudge and by the evolution of the ideal model of the individual that neo-liberalism has constructed. My conclusion is that these objects, as cognitive assistants, impose a new model of individuality. This model is ecological in the sense that it is no longer possible to think of a rational individual only in the register of his mental faculties: henceforth, rationality is distributed in the technical environment to compensate for the irrational practices which posed so many problems for liberal theories. Economic gains are then a means of freeing the individual from the responsibility of thinking through certain areas of his practice by entrusting them to machines capable of calculating supposedly “without bias”. The time thus freed for the mind makes it possible to limit moments of irrationality, or at least to monitor and contain them.

Who will be the most relevant specialists and thinkers on this subject in the years to come?

Charles Corval: The main current thinkers that I discuss and mobilize in my work belong to very different branches of the philosophy of technology and political theory. Recently, I was very impressed by the work of Jean-Hugues Barthélémy, whose earlier work had already helped me a lot in understanding Simondon. His latest book, The Society of Invention: For a Philosophical Architectonics of the Ecological Age, invites a massive reflection rethinking technology in the context of what he designates as “the ecological age”. I was also particularly marked by the work of Thierry Ménissier, Fabienne Martin-Juchat, and Pierre-Antoine Chardel, my second thesis director. Each is trying to reflect on the philosophical moment we are living through: the massive renewal of technologies is producing new challenges for a capitalism already facing immense challenges and crises. Abroad, I would cite without hesitation Peter-Paul Verbeek, whose project for a new foundation of phenomenology based on technical issues continues with great fertility the approaches of already established thinkers such as Don Ihde and Andrew Feenberg. On surveillance, Zuboff’s latest book also promises to continue thinking about the evolution of surveillance capitalism; in a perhaps more critical vein, Cédric Durand’s work on techno-feudalism is also an interesting economic approach to the phenomenon. Finally, we must mention the authors of neo-materialism, a rather English-speaking movement which seeks to put materiality back at the heart of the thought of technical and political processes. Chief among them are thinkers like Karen Barad and Jane Bennett. This movement also has crossovers with French authors of the second half of the 20th century, mainly Foucault and Deleuze. Thomas Lemke recently produced a very interesting book on the relationship between Foucault’s theory of technology and this neo-materialist current.

Why did you join the University of California, Berkeley, the alma mater of Steve Wozniak, the co-founder of Apple?

Charles Corval: I currently have a Fulbright scholarship for research at Berkeley. The objective of this stay is first of all to continue to discover thinkers capable of enlightening my understanding of connected objects – for example, I recently read an excellent article by Marion Fourcade, who works at Berkeley, whose objective is to condense the main results we have concerning the algorithmic society; I have rarely read such a dense and clear summary of the social issues of digital technology. Then, my aim is to carry out fieldwork by organizing interviews with engineers, designers, and product managers in charge of developing connected objects. I hope to draw from these discussions reflections on how ethical and political issues fit into the production of these new objects. Finally, I hope to find some sources concerning the origin of the key concepts of ubiquitous computing; I recently discovered that the Mark Weiser archives are available in the San Francisco Bay Area. The American academic John Tinnell is due to publish, within a few months, a biography of Weiser drawn from these archives (The Philosopher of Palo Alto: Mark Weiser, Xerox PARC, and the Original Internet of Things). My project is more humble: I seek to understand how Weiser came to invent the very concept of ubiquitous computing, from which comes part of our current dreams of a great Internet of Things.

 

About the Author
Alexandre Gilbert is the director of the Chappe gallery.