Over the last few weeks, I have read too many articles about people who have committed suicide, to the surprise of those around them. Often, when friends and family are interviewed after the fact, they comment on “a change” they observed but never thought was “so bad”. When the social media updates of these people are reviewed, you can sometimes pick up indications of an acute psychological crisis. I should point out that there is a tremendous amount of information in social updates that can indicate psychological stress, down to the way in which a person types the update.

Young people who commit horrible crimes also tend to leave indications of their intentions in their emails and Facebook posts. I have heard many experts suggest that parents should be more aware of what their children are doing online. At some point, though, someone raises the issue of respecting the young person’s privacy, which limits the right of anyone, including parents, to peruse the child’s shared, and unshared, information.

Putting aside the issue of privacy rights versus keeping your loved ones safe, there are options for creating tools that could warn guardians and friends of an impending human disaster.

Imagine if the major email and social media companies provided a special feature, initially geared towards parents of children under the age of, say, 16, that analyzed the text of all of their shared interactions. There are already programs and algorithms that can look at typed text and estimate the likelihood that the person who wrote it is depressed, suicidal or a danger to others. What if a parent could even pay for such a service, one that would scan their children’s social media in order to warn the parent of a problem? There would not need to be any details in the warning – just a message such as “Your child has uploaded a concerning post”.
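To make this concrete, here is a rough Python sketch of the kind of classifier such a service might be built on. This is only an illustration under big assumptions: it presumes a labeled corpus of posts already exists (the hardest part in practice), and names like `notify_parent` are hypothetical placeholders, not any company’s real API.

```python
# A minimal sketch of a post-screening classifier, assuming a labeled
# training set of posts (1 = concerning, 0 = benign) is available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_risk_model(posts, labels):
    """Fit a text classifier that estimates how likely a post is concerning."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(posts, labels)
    return model

def check_post(model, post_text, threshold=0.8):
    """Flag a single post; the warning is deliberately vague, as described above."""
    p_concerning = model.predict_proba([post_text])[0][1]
    if p_concerning >= threshold:
        notify_parent("Your child has uploaded a concerning post.")
        return True
    return False

def notify_parent(message):
    # Hypothetical delivery channel: email, SMS, an app notification, etc.
    print(message)
```

A real system would draw on far richer signals, timing, typing cadence, even images, and a far better model, but the plumbing would look roughly like this.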

One of the things I remember learning when I studied a bit of adolescent medicine is that acute changes in grades can be related to drug use, severe emotional distress, early psychiatric disease and the like. When one of my own children, years ago, did poorly on an exam [which was out of character for this child], I asked the child right away if there were any problems, and I didn’t leave anything off the list of options. The service I described above, which would programmatically look for indications of mental stress, is really not any different.

I’ll say one thing about privacy with regard to such a service. I personally believe that privacy is a fantasy. Every other day, there are major breaches that expose tremendous amounts of “private” information to the Internet. I personally work with zero expectation of true privacy. If someone really wants to read my emails, I know that they can get access in seconds. And I just accept this and go back to sleep.

In a world where we force our children to deal with very adult issues at far too young an age, I think that privacy is a luxury we cannot afford. I don’t expect everyone to agree with me, and that is why this would be a paid, opt-in service. Additionally, you could even have a system whereby the child is informed of the concerning text as well. So, if the latest post seems to indicate suicidal tendencies, the program could anonymously pop up a warning that shares this concern with the child. Based on the child’s response, this concern could be shared with other people. These other people could even be a group of online social workers or counselors employed by the social media company.
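The escalation logic here could be a very small state machine: the child privately sees the warning first, and only their response, or their silence, determines who else is told. Here is a sketch; the response codes and tiers below are invented purely for illustration.

```python
# Hypothetical escalation flow: the child is warned privately first,
# and their answer decides whether anyone else ever hears about it.
from enum import Enum
from typing import Optional

class Escalation(Enum):
    NONE = "no further action"
    COUNSELOR = "route to on-platform counselors"
    GUARDIAN = "notify the parent or guardian"

def next_step(child_response: Optional[str]) -> Escalation:
    """Decide what happens after the child privately sees the warning."""
    if child_response == "im_ok":
        return Escalation.NONE       # child dismisses the warning; log quietly
    if child_response == "want_help":
        return Escalation.COUNSELOR  # child asks for help without involving family
    return Escalation.GUARDIAN       # no response at all: err on the side of safety
```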

I think the first thing I would want to do is to anonymously scan all of the trillions of previous social media posts to see if there were indications of dangerous behavior. The software could then continue to follow the posts of the same users who wrote these dangerous posts to see if something did in fact happen. Admittedly, if the individual did, Gd forbid, commit suicide, there would be a sudden stop in posts. But this too would be an indication of “a problem”. My point is that without breaking anyone’s confidentiality, companies like Facebook, WhatsApp and Twitter could test whether there is an indication of dangerous behavior among their users. How they use that information is a secondary question. But once the general public knows that social media companies can even predict suicide and dangerous crimes, there will be pressure to share this information with the authorities.
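Reusing the classifier sketched earlier, that retrospective test could be coded roughly as follows. The outcome measure is crude by design: an abrupt, lasting silence after a flagged post is treated as a possible signal. Both the data access and that proxy are assumptions made only for illustration, and nothing in the loop needs a user’s identity; aggregate counts are enough to tell whether the signal is real.

```python
# Hypothetical retrospective check: score historical posts, then ask how
# often a flagged post was followed by a sudden, lasting silence.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    text: str
    timestamp: datetime

def retrospective_check(model, user_histories, threshold=0.8,
                        silence_window=timedelta(days=90)):
    """Fraction of flagged users whose posting stopped soon after the flag."""
    flagged = flagged_then_silent = 0
    for posts in user_histories:                  # one list of Posts per user
        posts = sorted(posts, key=lambda p: p.timestamp)
        for i, post in enumerate(posts):
            if model.predict_proba([post.text])[0][1] < threshold:
                continue
            flagged += 1
            later = posts[i + 1:]
            # Crude proxy for "a problem": no later posts at all, or the
            # last one came shortly after the flag and then nothing more.
            if not later or later[-1].timestamp - post.timestamp < silence_window:
                flagged_then_silent += 1
            break                                 # one flag per user suffices
    return flagged_then_silent / flagged if flagged else 0.0
```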

The purpose of this post is truly not to get into any political or ethical argument about privacy and parental rights. There are a lot of people out there in horrible pain who just have no idea how to reach out for help. It might very well be that the last person they should be reaching out to is their parents [who may be the cause of the problem in the first place]. But we have data, and that data is telling us things that somebody needs to know in order to protect people and to help them. Once again, I am only voicing my own opinion, but if one of my children were in trouble and the only way I could find out was to have such a service warn me, I would be willing to pay a great deal for it.

As a doctor and software programmer, I want to see technology help people and save lives. And now, for the first time in history, we have the CPU power and the data to do this. I hope that someone eventually does.

Thanks for listening. And I would appreciate it if someone out there could help me and other concerned people listen better to the people we love.