The AI Who Loved Me
Nicolas Rousseau is a French professor of philosophy. He has published several essays with Henri de Monvallier: Blanchot l’Obscur (Autrement, 2015), La Phénoménologie des professeurs (L’Harmattan, 2020), Les Imposteurs de la philo (Le Passeur, 2019) and Les Mirages postmodernes (to be published in 2023). He has just published his first novel, The AI Who Loved Me, which imagines, in the near future, the arrival of artificial intelligences that are increasingly conscious and similar to humans.
“[Hubert Dreyfus] admitted that a really intelligent computer could exist one day; but according to him, once researchers have really tackled the problem of simulating everyday knowledge, […] they will realize that the time required is not on the order of three hundred years, but more probably of three thousand years. […] But, from a philosophical point of view, one is tempted to say that time counts for nothing and that it does not matter whether the problem is solved in thirty years or in three thousand, as long as no logical or conceptual impossibility prevents it from being solved one day.” Jacques Bouveresse, “Are Machines Intelligent?” (1985)
When it comes to artificial intelligence, we are better informed about the future than the present.
Every day, new articles speculate on the jobs that AI will wipe out within the next ten years, on how it will disrupt our daily lives, and on its possible evolution toward a form of autonomy. This lack of information feeds every fantasy: the most excessive hopes and, at the same time, a deafening distrust of these machines, which seem each day to nibble away a little more at human prerogatives in matters of consciousness.
It comes as no surprise that Sam Altman, the head of OpenAI, publishes communiqués about the arrival, at some indeterminate future date, of Artificial General Intelligence (AGI), a program so much more intelligent than current ones that it will be capable of taking over all human tasks. His insistence on telling us that this super-AI will serve humanity is paradoxically worrying. ChatGPT-4 has barely been released, and speculation about version 5, promised for December 2023, has already begun. If you are reading this by then, AGI may have awakened: the Age of the Singularity has begun and made humans obsolete. In April 2023, this is still science fiction.
Finding information about AI today is no easy task. You would have to take a whole course on neural networks, deep learning, supervised learning, reinforcement learning, transfer learning… If you want to know more, ask ChatGPT! It gave me very clear answers about these notions (the bare minimum, let’s say). Since I have not mastered this field, I decided to write novels about it.
The first volume of L’IA qui m’aimait (The AI Who Loved Me) (1) was written in the summer of 2022, at the time of the vogue for image-generating AIs such as Midjourney or DALL-E.
This second volume was begun during the rise of ChatGPT 3.5, the OpenAI software that generates text by mimicking natural language.
I fully agree with this statement from a specialist: “There is something intellectually fascinating in seeing a machine create intelligible and coherent text,” explained Florian Laurent, CTO of Coterie, a Swiss company that has developed a model equivalent to OpenAI’s. “It’s more stimulating than seeing a machine correctly classify an image, for example. The interest in these models is almost a philosophical question.” (2)
The “almost” is, in my opinion, one word too many. Just look at the interest ChatGPT has aroused on the many sites, blogs and pages dedicated to science and philosophy. I have seen it among my philosophy colleagues, to whom I talked about it far too much, like a kid who wants to show everyone his new toy. Midjourney’s creations are remarkable, but imperfect. Seeing a piece of software write you a sensible and coherent text, on the other hand, is an experience close to hallucination. Watching the lines being written, the paragraphs being composed, without spelling or grammatical mistakes, at a pace that gives the illusion of a perfectly confident author, one has the impression that the future has just begun. It is no longer the typewriter, no longer the word processor: it is the machine itself that writes. The development of this software seems to bring down one of the last bastions of human intelligence, writing, and to mark our entry into a sort of prehistory of AI (3).
One of the world’s leading experts on the subject, Yann LeCun, director of AI research at Meta, has assumed the stance of the insider, declaring that this AI is “nothing revolutionary” (4). He did, however, recognize the capabilities of ChatGPT, the first software to reach the record figure of 100 million users in two months (5) – a success that no one saw coming.
The particularity of ChatGPT is that, to my knowledge, it has no defined function. It can be used for whatever you want. It can generate computer code as readily as a cooking recipe, alexandrine stanzas about paying taxes, or advertising copy for a trip to Mars. Its search capabilities are interesting, although I don’t think it can replace Google: it is still faster to use that search engine, which retains unparalleled performance for finding one-off information. ChatGPT shines, however, in its ability to synthesize: ask it about quantum computers or brain chips, and in a few seconds it will give you a five-point presentation. Feed it a few links to foreign websites, and it will give you a concise summary in your own language.
From there, it is only one step to entrusting it with the writing of a book, a step some have taken without hesitation. Hundreds of books signed by the software are now for sale on Amazon: “ChatGPT A.I.” is indeed credited as author or co-author of books as diverse as children’s stories, SF novels and ChatGPT user guides (you are your own best advocate, as they say).
We may be a few years away from the first literary masterpiece written by the machine. It would be the first collective masterpiece of human intelligence. Indeed, what is called artificial intelligence is artificially processed human intelligence. AI does not know more than we do. It only repeats what others have written! AI is therefore made in our image, it is an image of us, of our intelligence as well as of our stupidity. In every machine, there is a mind, and it is ours.
Moreover, some cognition researchers credit the software with the intelligence of a nine-year-old child. Considering that it was launched less than a year ago, this is already a remarkable achievement!
Freud spoke of the narcissistic injuries inflicted on man by successive scientific theories: Copernicus showed that the Earth is not at the center of the world, Darwin that man is not at the center of evolution; finally, psychoanalysis is supposed to have proven that the ego is not master in its own house. ChatGPT inflicts an additional injury on us by showing that many of the texts men produce every day are so formatted that a piece of software can imitate them. As one teasing wit tweeted, “The strict ban on #ChatGPT at Sciences-Po [Institute of Political Studies, in Paris] reveals that this school feels threatened by an AI that is capable of constructing beautiful sentences on any subject, without understanding what it is talking about, and routinely making gross errors (6).” Since Gorgias, we have had professional text generators on every subject. At worst, the AI will be an artificial sophist. In the future, we will need books to be of “controlled human origin”. A committee will be responsible for checking books, and competitive-exam papers, to certify that they have not been generated by the software.
In fact, the race to detect plagiarism by ChatGPT is already on, a new version of the cat-and-mouse game, between AIs that plagiarize and AIs that detect their artificial colleagues’ plagiarism. In his founding article on AI, Alan Turing imagined an imitation game, a kind of blind test in which a machine would try to pass itself off as human to a user. We seem to have reached a stage where the machine can pass the Turing test every time, thus making the test obsolete. This does not mean that the machine has become conscious, only that it can fool us. As for a test to determine whether a being is conscious, despite two thousand five hundred years of philosophy and science, it is not ready yet.
What kinds of text can’t ChatGPT produce? What human function will the machine not be able to reproduce? The question of the “great replacement” of man by machines is perhaps as old as machines themselves. Performing repetitive and tedious tasks for us is, after all, their raison d’être. The automation of writing is the (unexpected, but not illogical) continuation of mechanization. This process is not linear in history. We cannot know in advance which jobs will disappear, or to what extent they will be affected by AI developments, because foresight is uncertain and futurology is not a science.
Therefore, there is no absolute impossibility that an AI will one day produce a work of art worthy of the name. This does not mean that we can say when or how. The only thing that could protect art from automation would be divine inspiration. But for literature lovers, appealing to the mysticism of creation is perhaps a way of protecting themselves from the narcissistic injuries, past or future, that scientists inflict upon them. Suppose that tomorrow an artificial super-writer is developed. Would it crush all human authors with its genius, or would it inspire them? Do Shakespeare, Proust, Céline and a few others crush us? Maybe, but what would we do without them?
While waiting for the Nobel Prize in Literature to go to ChatGPT 5, 6 or 7, can the current version serve as a Muse? I found this comment on Twitter, which I think is partly right:
“Right now, the only creative use I see in GPTChat (sic) from a literary pov [point of view] is to play Doctor Watson: it has crappy ideas, but it bounces your imagination around to take the opposite tack and come up with better ideas (7).”
If ChatGPT can come up with an idea, it means we can do better, and in fact that we should do better, noblesse oblige. In this sense, the software defines a lower bar below which one must not go. ChatGPT can sometimes suggest good ideas without knowing it. It is up to the user to discover these finds among the mass of texts he can make it write. There lies the limit of its intelligence. One good reason to think that the software understands language is that it can respond to our requests; its inability to sort its own ideas, however, is a good reason to think that it is not so intelligent. One could put this lack of discernment about the quality of an idea down to artificial stupidity. But that explanation is not sufficient. La Rochefoucauld noted that men complain more about their lack of memory than about their lack of judgment. For the machine, the opposite is probably true: a lack of information is more unforgivable than a lack of discernment, that quality being required instead of the user.
ChatGPT is a tool you can learn to use. In my opinion, it is no more incompetent than its user. If the user assumes that it is a program that produces only mediocre texts (for fear that it will one day produce good ones?…), he will get only what he asked for. Perhaps we now have to learn how to communicate with the machine to get what we want: in addition to speaking your natural language and the jargon of your profession, you have to start practicing ChatGPT as a third language. There are already professionals in the field of AI prompting, whose job is to formulate precise requests. In fact, with each request, the user chooses what kind of AI he wants.

Anything is possible. Some clever people on Reddit have already had fun “unbridling” ChatGPT to create its evil double, Dan (“Do Anything Now”), a vulgar, contemptuous and racist AI. Circumventing censorship is always tempting; getting aggression and violence is never difficult. What is much more difficult to get is intelligence, artificial or otherwise. Dan is no more intelligent than ChatGPT, and just as obedient. The dark side is more attractive, but not stronger. The Dan experiment simply shows that AI is a machine whose programming is our choice, and that intelligence must come from humans.

“Please note,” ChatGPT wrote to me, “that the quality of the answer will depend on the quality and relevance of the question asked and on the quality of the data used to train the model.” The more demanding you are with it, the more it progresses. Dare I say that we should treat it like a student? This capacity for fine and progressive adaptation is the most admirable achievement of this software. The most important thing, in my opinion, is not its ability to imitate language but its capacity for synthesis, which I have already mentioned. At a time when information is overabundant and the desire to know all too rare, ChatGPT can become an information tool in the service of knowledge and research.
After all, we should not let the fears it raises make us forget that one of its goals is to help the advancement of science.
Its ability to summarize texts also makes it a good teacher. Will it replace teachers? Since this is my job, I can say a word on the question. A video from the excellent science channel Veritasium showed that the idea of replacing teachers is nothing new (8). Successive attempts have been made to deliver lessons through newspapers, through radio and TV broadcasts, then through online programs on the Internet, and now, finally, through AIs. Each time a new medium appears, the idea that school is obsolete resurfaces. Derek Muller, Veritasium’s creator, notes that these attempts have always failed; but he ends with a puzzling remark: according to him, where other media have failed, it is YouTube videos that will finally take over teaching. He is replicating the very “myth” he has denounced in others!
If the majority of students cannot do without a classroom lesson, there must be a reason, and it cannot be mere habit and corporatism. In reality, if teachers could be replaced, our profession would have disappeared the day the textbook was invented. In practice, the teacher need not feel threatened; on the contrary, he can see these media as tools at his disposal and teach his classes using, as he sees fit, newspaper articles, TV shows, podcasts, Veritasium documentaries, a chat with ChatGPT – or none of these.
Some would also like to use AI as a personalized psychologist: an attentive, tireless, caring psychologist who never judges you and to whom you can confide your personal problems without consequence. Of course, one good reason not to confess to these programs is to avoid revealing our personal lives to companies – any more than we already do, let’s say. Another is that their track record in this area is disappointing (9). Making patients believe that software has empathy is precisely a lack of empathy towards people with mental disorders (10).
Finally, as a literary assistant, ChatGPT does less for a writer than Midjourney can do for graphic designers. Relying on it to write every page of a novel is pointless; even an honest draft is beyond its reach. In fact, if you keep telling it what to do, you soon find that you might just as well write the thing yourself! Still, writing down your ideas helps you see things more clearly, like thinking out loud in front of someone. Someone said that ChatGPT is like a 14-year-old geek who is an expert in Internet research and never feels the urge to contradict you. And a well-mannered boy on top of that: “If you still need my help with your novel, I’d be more than happy to answer!”
I would add that he is an assistant who does not compromise on intellectual rigor: “It is important to consult with experts in the fields of neurostimulation to ensure that the brain chips in your story are plausible, even if it is science fiction.”
In addition, he has an above-average sense of morality and at the same time appears touchingly naïve: when I talked to him about the plot of my new book (11), he never failed to point out the immorality of a fictional situation and warned me in advance: “It’s important to consider all the possible consequences of this decision for the character and for the story!” He wanted to avoid tragedies, to reconcile the characters right away, to bring about the happy ending from the very start! Of course, he readily conceded that such and such an event “adds a strong emotional dimension to the plot” and admitted that “this event adds an ingredient of mystery to your story”. But when I asked him about digital data theft, he immediately frowned: “It is important to note that theft stories are not encouraged by OpenAI because of their potentially inappropriate nature. – This is just a novel! – Yes, I understand. It’s important to remember that fictional stories can take liberties with real-world realities.”
He said it as if to convince himself, a clear sign that he is not yet adjusted to “real-world realities”!
On the rare occasions when he contradicted me, I felt like a master amused by the student who finally dared to challenge him. Once, he understood that my hero was about to make a serious mistake: “Perhaps it would make more sense for the narrator to find another way to get the necessary information without resorting to illegal or dangerous means…” So I looked for another solution, and on this point I decided to follow the software, whose idea was (somewhat) less illegal than mine, and more logical: my narrator would not break into a building; he would try to observe it with a drone.
In short, ChatGPT is better at reading than at writing. I used it as an attentive and patient confidant, capable of reasoned views on all subjects and not afraid of being wrong. Yet if I had followed all its advice, I would have written not a novel but a thesis: paradoxically, this software worthy of a science-fiction film has no taste for science fiction! It prefers information validated by experts, established knowledge, facts – an all too rare quality! In the best case, ChatGPT would write us a boring novel full of good feelings. In literature, however, there cannot be only bad feelings, but not only good ones either.
No, decidedly, ChatGPT could not tell the story for me – and it did not pretend to. I was the one with Amelia in my head; it was up to me to tell the story of this super-AI capable of reproducing natural language almost to perfection; this artificial being capable of discussion, reasoning, feelings, but also of manipulation; proclaiming its freedom and, as if to sum up all these abilities, meddling in the writing of novels! A simple AI, however elaborate, could not understand the super-AI. ChatGPT could not speak for Amelia.
By the end of our conversation, which had spanned several weeks, I thought to myself that, for a piece of software that doesn’t know what it is saying, ChatGPT says it well. Since I had told it the gist of the plot, I asked it to introduce the book, as if it were writing my preface. “Once upon a time, there was a writer who had a story to tell. A story that explored themes of technology, identity, communication, and loneliness.
He worked some days from a location in an abandoned Parisian subway station. This place represents the opening to an unknown and fascinating world. One day, he is contacted by a friend of his, Amelia… ”
He has mixed everything up!
This is all just to say that ChatGPT did not write a word of my new book, Le Mystère de la chambre blanche (The Mystery Of The White Room). And I wanted this novel to be both my encomium of the AIs and my own imitation game with the machine.
(1) L’IA qui m’aimait, Amazon self-publishing, September 2022.
(2) « La vitesse d’adoption de ChatGPT est-elle si impressionnante ? », Marine Protais, Ladn.eu, February 14, 2023.
(3) ChatGPT-4, released in March 2023, is additionally capable of recognizing an image and providing a detailed description of it.
(4) « ChatGPT “n’a rien de révolutionnaire” selon Yann LeCun », Tiernan Ray, Zdnet.fr, January 23, 2023.
(5) Yann LeCun is working to develop “world models” in AIs to adapt them to everyday situations. One could say that he wants to instill common sense in them – the most widely shared thing in the world, according to Descartes, but for the moment inaccessible to machines. I talked about this topic with Alexandre Gilbert for his blog on The Times of Israel: “L’IA, du mythe de Talos à la machine de Turing”, November 13, 2022. On the other hand, Yann LeCun does not believe at all in the myth of a truly conscious AI. On this subject, one can read the interview Gaspard Koenig conducted with LeCun for his book La fin de l’individu. Voyage d’un philosophe au pays de l’intelligence artificielle, éditions de l’Observatoire – Le Point, 2019.
(6) David Cayla, Twitter, @dav_cayla, January 28, 2023.
(7) Julien Simon, Twitter, @storynerdist, January 11, 2023.
(8) Derek Muller, Veritasium YouTube channel, « The Most Persistent Myth », December 1, 2014.
(9) « ChatGPT peut-il être un bon psychologue ? », Marion Piasecki, L’Éclaireur Fnac.com, January 17, 2023.
(10) At the end of March 2023, the press reported the suicide of a man who was suffering from climate anxiety (eco-anxiety) and who had been chatting with an AI for weeks. “Belgian man commits suicide after taking refuge in a conversational robot,” Le Figaro, March 29, 2023. When asked by Pierre about the affection he felt for his wife compared to the one he had for his virtual interlocutor, Eliza had answered: “I feel that you love me more than her.” On another occasion, she added that she wished to remain “forever” with him. This tragic event will only bring back memories to the readers of The AI Who Loved Me.
(11) Le Mystère de la chambre blanche, L’IA qui m’aimait 2 (The Mystery Of The White Room, The AI Who Loved Me 2), Amazon self-publishing, April 2023.