A.I. Malpractice: A Guest Blog
The following was written by Tova Hope-Liel, a student of Digital Humanities at the University of Haifa (a joint program of English Literature and Information Systems). I thought it was important, so I’m sharing it here.
For those of you who don’t know what an LLM is, this is the definition I found by Googling:
A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. LLMs are trained on huge sets of data — hence the name “large.” LLMs are built on machine learning: specifically, a type of neural network called a transformer model.
The issue with “an LLM which teaches children how to speak on online applications like Whatsapp, etc” is an ethical miscarriage so deep that it goes to the crux of the issues with AI, LLMs, and the future of humanity. And no, I am not exaggerating.
This project was one of many proposed this year for the University of Haifa’s Information Systems degree, and it will be presented at the University’s project fair.
Yes, you read that correctly, someone has suggested and is trying to produce an AI which will “train” children to “behave appropriately online.” There is no circumstance under which this is ethical or conducive to a better world.
The issue stems, like many AI issues, from a deception. The project claims it will help children by teaching them to avoid verbal and social faux pas, but I will prove in this article that there is no way for this to be done ethically.
Children of the age targeted by this project are those in the prime of rebellion. Tweenagers and older are constantly pushing boundaries stylistically, socially, and mentally. In a whirlwind of hormones, they battle pressures from their peers, parents, and teachers. As they begin to be inducted into adulthood, they test the social contract erected around them: what is polite, what is proper, what is expected of an adult. These moratoriums, periods of testing before committing to an adult identity, are essential phases of maturation and exist in almost every culture. Contrary to what adults might expect, children do not need a safe space to be good.
You might say that this LLM will create exactly that: a place where they can push boundaries without violating social mores in front of real people. It would be better than a safe space to be good; it would be a safe space to be bad.
Except that isn’t what children need. To be “bad” implies some sort of social (or, if you’re the fire and brimstone type, spiritual) punishment. A child does not need a safe space their whole life. They need to eat metaphorical dirt to help their immune system grow.
It sounds horrible, it sounds mean, it sounds counter-intuitive. Everyone knows that “children can be cruel” and this could be a solution to that, right?
The problem with children being cruel is not that the children are mean. It is that they are mean with no consequences. Children who are cruel to others are not punished for their indiscretions. The common trope that children who have been mistreated will mistreat others may be true (and has some statistical support), but the fact remains that children who mistreat others are neither punished for it nor taught the social error of what they have done; they are punished for something else. A child who grows up in a bubble will swoon as soon as they step into the real world, and a child who eats nothing but sand will always get sick. All things in moderation. All children should eat a little bit of dirt.
So what if they eat a little dirt, say the wrong thing, or want to bully another child? This could cut cyberbullying rates in half! Could it really? Could it create the utopia you’ve been craving, where children never sin? You wouldn’t even have to lift a finger! How cool is that?
Never mind that what children are supposed to be taught at this age is how to solve problems on their own. Never mind that this would be a computer, which is not a person, impressing ideas onto children.
“You’re an alarmist! A crazy conspiracy nut! It would be trained by a human, duh! It could reach this new age of children better than we ever could, what with their skibidi toilets and other silly little interests. I could never understand them, and therefore, I could never sympathize with them, because I cannot empathize with people I do not believe are sentient beings. So a child is so far away from me that a thing that has no thoughts of its own is the only thing that can mimic sympathy. It’ll be personalized/customized/trained to the child. It will learn the child’s thoughts and feelings and will parrot back to them The Social Contract as mediated by someone I do not know and have no control over. It will teach my child better than I ever can. It will parent my child better than I care to do. It will love them. That’s practically the same thing.”
If you believe that a computer training a child and a parent teaching a child are so similar, then why hand off the creation of your child’s ethics, values, and morality to someone else? If you could do it just as easily, why don’t you? Are you too busy? Or are you just uninterested?
But enough about the children, who will be harmed by the propagation of this app. Let’s talk about the only thing adults truly care about: adults.
Children can be cruel, and yet, adults can be even crueler. An adult who does not raise their child to understand the social contract’s existence, much less its values, is not a parent. They’re not even a babysitter. They’re nothing.
Whether or not a program “can” parent your children is irrelevant. Infinite monkeys can parent infinite children eventually. It’s about whether or not a program should. What does that mean for you? What have you done today to teach your child right from wrong? To connect with your child who you supposedly care for? To parent your child? Isn’t that the point of parenthood? To care?
Those who support the integration of AI into every aspect of humanity claim that AI will never truly replace humanity, and yet, have no answer for what humanity is supposed to do with all of these tools that conduct our experiences for us. Because that is what a majority of AI programming does, as it is sold to the consumer today. It simulates experiences. It makes everything “accessible” to the layman. It creates a “safe space” for something new. Except that dirt is part of life. Nothing is pure, nothing is wholly sacred, and there is no perfect person or utopia.
If you use this program, support this program, or even let it pass through your mind unexamined, then you might as well be the one forcing a child to run their words through it before they try to express themselves, unmediated, to their friends. And if Gen Alpha only listens to computers anyway? Then you’ve already shown them they have no use for you. A parent who has no authority is not a parent at all. A person who has no authority is not an adult. They have no rights.
You thought Gen Alpha had social anxiety before? Now let’s place them in front of a machine that tells them whether or not they should be ashamed of the way they express themselves in private.
Be angry. You’ve just stolen the personhood of a generation.