A question comes up repeatedly regarding websites like Facebook and Twitter. Specifically, many developers and decision-makers wonder how best to extract critical information from all of the posts on these sites. It has surprised even the owners of social media companies how much one can learn about society from the information that people freely and consciously upload. It is already possible to create a map of certain socially important issues and to use that map in a proactive way [to identify potential voters, to focus on regions with specific needs, to track the spread of infection and more]. It is also possible to focus down to an individual and identify certain important trends in that person’s behavior. The question that follows is whether such information should be gathered and, even more so, whether it should be exposed.

In the following article, the writer describes a group by the name of the Samaritans. The Samaritans, a well-known suicide-prevention group in Britain, recently introduced a free web app that would alert users whenever someone they followed on Twitter posted worrisome phrases like “tired of being alone” or “hate myself.” The point of this software was to identify individuals whose behavior was consistent with self-harm or even suicide. Clearly, the intention of the developers was noble. Far too often, there have been news reports about young people driven to suicide by cyberbullying or real-life oppressive behavior. If susceptible individuals could be identified by software, then the online community could step in and at least try to help the person.
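To make the mechanism concrete, here is a minimal sketch of the kind of phrase matching such an app might perform. The phrase list and function names are my own illustration of the general idea, not the Samaritans’ actual implementation, which has not been published.

```python
# Illustrative sketch only: flag a post if it contains a phrase from a watch list.
WORRISOME_PHRASES = [
    "tired of being alone",
    "hate myself",
]

def flag_worrisome(post_text: str) -> bool:
    """Return True if the post contains any phrase on the watch list."""
    text = post_text.lower()
    return any(phrase in text for phrase in WORRISOME_PHRASES)

# Example: scanning recent posts from accounts a user follows.
posts = [
    "Great game last night!",
    "I'm just so tired of being alone these days.",
]
for post in posts:
    if flag_worrisome(post):
        print("Alert: potentially worrisome post:", post)
```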

It is not necessarily expected that strangers would act upon such social media posts. On the other hand, true friends of the individual would likely welcome the ability to spot dangerous behavior and to act on it. Although the focus of this software is the prevention of self-harm and suicide, one could potentially develop software that identifies signs of abuse by another person, alcohol abuse, drug abuse and more. Perhaps even a kidnapped child could be identified by the contents of social media posts. I suspect that quite a number of doctoral dissertations in psychology and sociology are being written specifically on the topic of what we can learn from social media.

I admit that when I first read about this new type of software, I was very excited. More than that, I wondered why a social media site like Facebook had not already bought the company in order to fully incorporate such software into Facebook’s platform. But then, as I read on in the New York Times article, I came to understand the tragic dark side of such a feature.

If a person at risk for any type of self-destructive behavior is flagged in a public forum, that person also becomes a potential victim of evil people who prey on the susceptible individuals in society. If a 15-year-old girl is posting about her loneliness, a sex offender could approach her online, playing the role of a concerned friend. In a short period of time, the sex offender would likely succeed in getting the 15-year-old to meet in person. The tragedy from this point on is self-evident.

After these concerns were expressed, the Samaritans pulled the app. It was a very difficult decision for them, given that they knew they could help so many people. Nevertheless, they responsibly recognized that they could very well hurt many people as well.

I thought about this whole situation for a while. I came to a very simple conclusion. The problem is not the data (which is extremely important). The problem is that everyone can see it.

The very nature of social media is that individuals post personal information to a very public forum. I am just as guilty as many others. I recently posted photographs of my wife and son, together with me, at a wedding. I was extremely proud to be with them and I wanted to show it off. So I posted the pictures on Facebook and, as expected, received very pleasant and supportive feedback from my circle of online friends. My point is that people will not stop sharing information. Telling someone that a post they just uploaded demonstrates problematic behavior will likely have no real effect, except perhaps to cause that person to switch to a different social media site or, more simply, to open a new account and not invite the “intrusive” friend.

But what if notifications about problematic posts were directed only at specific people? For example, Facebook could offer a new service whereby the user designates “emergency contacts.” These emergency contacts, who would have to be, at the very least, Facebook users as well, could be the user’s parents, a teacher or a close friend. Facebook could present this feature as a free form of insurance in the event that the user posted something such as “my car broke down,” “I need an urgent ride to school,” or even “I think I’m too drunk to drive. Can anyone help me?” But these emergency contacts could also be the people who receive the notifications when postings demonstrate problematic behavior.
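As a rough sketch of how such routing might work: a flagged post triggers a private alert to the pre-approved contacts and nothing else. All of the names below (EMERGENCY_CONTACTS, notify_contact, route_alert) are my own invention for illustration; none of this is part of any real Facebook or Twitter API.

```python
# Hypothetical sketch: send alerts about flagged posts only to the user's
# self-designated emergency contacts, never to the public feed.

# Each user designates a short list of trusted contacts in advance.
EMERGENCY_CONTACTS = {
    "user_123": ["parent_456", "teacher_789"],
}

def notify_contact(contact_id: str, user_id: str, post_text: str) -> None:
    """Placeholder for a private message sent to a single trusted contact."""
    print(f"Private alert to {contact_id}: {user_id} posted: {post_text!r}")

def route_alert(user_id: str, post_text: str) -> None:
    """Deliver the alert only to the user's pre-approved emergency contacts."""
    for contact_id in EMERGENCY_CONTACTS.get(user_id, []):
        notify_contact(contact_id, user_id, post_text)
    # Deliberately no public broadcast: the flag is never visible to strangers.

route_alert("user_123", "I think I'm too drunk to drive. Can anyone help me?")
```

The key design choice is that the user, not the platform, decides in advance who is allowed to see these alerts, and nobody outside that list ever does.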

The very simple point is that the user has already identified individuals who are sufficiently trusted to come to the user’s aid when he or she is in trouble. While this still does not guarantee that these contacts would be the ideal people to help with a suicidal crisis, they are surely a safer audience than the entire Facebook or Twitter universe.

Nothing is perfect. But this is a far better solution than doing nothing or, conversely, exposing highly personal information for all the Internet to read. I definitely think that Twitter and Facebook should explore such a protective monitoring feature, but they should do so with a team of psychologists, sociologists, specialists in adolescent care and the like. These social media sites should even work together to put out a common set of standards for how such high-risk notifications should work and with whom they should be shared.

Children are dying when there is a chance to help them. This option cannot be ignored. But the implementation of such an option must be done in the most responsible of ways. If even one child were abused because of such sharing of sensitive information, all would agree that this would be unacceptable and intolerable.

As Spider-Man’s uncle told him, “with great power comes great responsibility.” This is an opportunity for the great and powerful social media sites to show how responsible they can be.

Thanks for listening.

Please feel free to view my website at http://mtc.expert