There has been a great deal of commotion around a recent news article (see the first and second links) that described Facebook conducting a week-long experiment in the emotional manipulation of a few hundred thousand of its members. This happened in 2012 and involved filtering positive versus negative posts in users' news feeds to see how this affected the mood of the target audience.

Almost unanimously, the general community has called this human experiment unethical and possibly illegal. The authors of the study, which was published in a scientific journal, defended their actions, maintaining that the potential harm was so small as to be nearly undetectable. You can see their complete response via the links at the top of this post.

While not justifying Facebook’s actions, many people wrote in to say that such behavior was not surprising and was most likely widespread, at least across the entire social media world. Basically, this is similar to the argument regarding performance-enhancing agents in competitive sports like the Olympics and the Tour de France. If everyone is using them, is it still unethical? In fact, if everyone is using them, why is it even illegal? There is a practical rationality to this argument. The danger, which ultimately concerns the general public, is that the same logic can be used to justify almost anything as long as the ends are worthwhile. So, according to this argument, we all know that there is no privacy and that the NSA knows everything about us. Why, then, waste our time trying to protect ourselves with better and better firewalls? We should just save our money and spend it on a trip to the Bahamas.

One very legitimate question was how the journal even allowed itself to publish the resulting conclusions. There is a very complex process that one must go through in order to conduct studies on humans. One of the basic principles is that the people being tested must not only be aware of the testing, but must give their express permission to be tested. Facebook did not do this. I understand why: they were concerned that studying only people who had agreed to be studied would skew the results. In the end, they claim that people presented with positive news produced an average of one fewer emotional word per thousand words over the following week.

I have to say that, independent of how this information was collected, the conclusion is very important. We should not in any way minimize the potential effect that online social media has on us. Our moods could very well change after just a couple of minutes online. To appreciate the potential danger here, consider that a disreputable individual or company could seriously manipulate the emotions of a population via nearly undetectable changes in the content being published. As a world, we need to know how vulnerable we are.

This study actually works against Facebook’s interests. There are many people who already argue that Facebook has too much power. However, Facebook is far from the only online service that has a large audience and can manipulate that audience. What is fascinating here is that a publisher’s intent might be for the general good of the public, yet the content might still have a very negative effect on some.

When any company purchases a huge billboard ad for a particular product, there will be some people who are incredibly offended by that ad. Imagine a socially conscious advertisement in favor of breast-feeding. There are many women who have to forgo breast-feeding for any number of reasons, and these women may very well be upset by the reminder that breast-feeding has many benefits. Considering how many women suffer from postpartum depression, and how dangerous such depression can be, such an advertisement could push a woman who is struggling with breast-feeding over the edge.

The ultimate origin of the following statement may even be biblical, but most people know it as words of warning to a young Peter Parker, who has just acquired his powers as Spider-Man. His uncle Ben tells Peter, “With great power comes great responsibility.” Never before has a single website reached billions of people the way Facebook has. And it will not be long before the entire world is online constantly. Smarter and progressively cheaper phones will allow even the poorest to take full advantage of the Internet and all of its websites. Now take into account that a given statement or discussion might be understood totally differently depending on the culture of the person reading the post. To what extent can a company be held responsible for the way in which its content affects the mood of the people reading it? To what extent does a company need to consider the cultural sensitivities of the entire world? I don’t pretend to have the answers. As I said, this has never been a practical question until now.

There may very well be an individual or group of people who decide to sue Facebook for its actions. Ultimately, even if Facebook is fined hundreds of millions of dollars, it won’t have any serious effect on the company. And considering the public backlash, I suspect that Facebook has learned not to perform such a study again. I would not be surprised if Facebook has already assembled a team of lawyers and ethicists to guide any future research that it does.

When all is said and done, hopefully no one was hurt by the study. I personally am happy to know the result, as it can be used to rein in the actions of any popular website. But let me leave you with this question: if Facebook had asked you to be part of a study that was almost imperceptible to you, and had offered you some form of reward (money or special access to added functionality), would you still have said no to being a participant?