The Suicide Algorithm: Facebook’s Self-Appointed Role in Mental Health

Social media and mental health have a rather difficult relationship. There is no shortage of studies showing that the amount of time we spend on Instagram and Facebook can be actively detrimental to our confidence and self-esteem, with young people deemed particularly susceptible to such effects. A constant live-feed of other people’s activities and successes can certainly take a psychological toll, but social media platforms have also provided a valuable space for people dealing with mental health issues to discuss their lives in the company of similarly struggling strangers. On Facebook, Reddit, Twitter, Tumblr, and elsewhere, people have come together to share common experiences, discuss the shortcomings of the medical industry, lend an ear and offer help. These virtual communities are often a significant source of support in a world of inaccessible healthcare and overstretched services, plugging the gaps where treatment fails.

One of the most important functions of these hidden pockets of the Internet, stashed away in private groups and limited-access forums and kept troll-free by dutiful volunteer admins, is that they provide a space for people to vent outside of the confines of the medical community. Admitting suicidal feelings to a mental health professional or mandated reporter can result in being taken to hospital against your will, a traumatic experience that may not necessarily speak to the severity of the situation. A lack of professional oversight may seem dangerous to some, but removing the risk of having the authorities called on you for speaking your mind is a vital relief for people struggling with regular suicidal ideation. You may be reaching out to a community of strangers, but real-world friends who are unfamiliar with depression and other mental health issues are likely to overreact to the expression of negative thoughts, leading to discomfort on both sides. Judgment-free spaces run by and for fellow sufferers remove both the burden of tailoring your emotions for a mentally healthy audience and the fear of non-consensual intervention.

This may not always remain the case. Since 2015, Facebook has implemented so-called suicide prevention tools, allowing people to “report” friends who have posted distressing content and presenting those friends with options to reach out for support. This move has been met with mixed reviews from the broader mental health community, though it drew significant support from mental health charities. British helpline Samaritans praised the feature in Glamour magazine, with representative Lynsey Pollard calling it “an opportunity for friends and Facebook to reach out to people who may be struggling and signpost them towards support.” When reported, the friend in question is presented with a pop-up memo informing them that a friend “thinks you might be going through something difficult and asked us to look at your recent post” and suggesting that they either reach out to a helpline worker or follow some “simple tips” on how to work through their situation. By March of this year, Facebook had developed algorithms that can detect content from potentially suicidal users and automatically flag it for attention, a move that has the potential to radically affect online mental health support communities.
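
Facebook has not published the details of how this detection works. Purely as an illustration of the general shape such a system could take, and not a description of Facebook’s actual pipeline, here is a minimal sketch of a text classifier that scores posts and flags anything above a cutoff for human review; the toy training data, the scikit-learn model choice, and the threshold are all assumptions invented for the example.

```python
# Illustrative sketch only: a toy "flag posts for review" pipeline.
# Nothing here reflects Facebook's real system; the training data,
# model choice, and threshold are assumptions made for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: 1 = post a human reviewer should see,
# 0 = post that should be left alone.
posts = [
    "I can't see the point in carrying on anymore",
    "I just want it all to end",
    "Feeling really low today, nothing helps",
    "Had a great weekend hiking with friends",
    "Excited to start my new job on Monday",
    "Look at this pasta I made for dinner",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.7  # arbitrary cutoff chosen for the demo

def flag_for_review(post: str) -> bool:
    """Return True if the model's risk score crosses the review threshold."""
    risk_score = model.predict_proba([post])[0][1]
    return risk_score >= REVIEW_THRESHOLD

print(flag_for_review("I don't think I can keep going like this"))
```

Even in this stripped-down form, the limits of the approach are visible: the flag rests entirely on surface patterns in the text, and the person on the receiving end has no say in where the threshold sits.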

As Facebook Live becomes increasingly well known for disturbing live-streamed deaths, there is a need for the company to be able to respond to crisis situations, and Pollard’s claim that “the potential benefits of offering support to someone who is struggling…outweighs any risk” certainly carries some weight in specific cases. But on an interpersonal level, being anonymously reported by friends seems unlikely to foster a sense of trust and security for the struggling person, even less so if they suspect the helping hand being offered to them may just be an algorithm. Critics have suggested that it may well result in the reported person feeling paranoid about which friend flagged them as unstable, leading to a reluctance to speak about their problems at all. An anonymous tip is far more detached than someone reaching out personally, and while taking on responsibility for someone else’s mental health is by no means obligatory, Facebook’s attempt to emulate real human comfort seems bound to leave some people feeling even more isolated than before.

Concerns about the impersonal nature of the reporting system aside, Facebook’s relationship with its mentally ill users is a lot less friendly than these occasional check-ins suggest. Earlier this year, documents from the company’s Australian branch were leaked, revealing that Facebook has developed algorithms to gauge the mood of teenagers and use that information to create targeted advertising. According to documents acquired by The Australian, Facebook has become quite effective at psychologically profiling its user base, detecting when teenage users feel “worthless,” “insecure,” “defeated,” “anxious,” “useless,” “overwhelmed,” “stressed,” or like “a failure.” Advertisers may be able to use some of this information to push their content more effectively, with The Australian claiming that teens could be targeted at moments when they were interested in “looking good and body confidence” or “working out and losing weight,” or were deemed to “need a confidence boost.”

Facebook denied that its research was intended to exploit young people’s emotional states for profit and released a vaguely worded statement claiming that such information was “never used to target advertisements.” In a piece for The Guardian, former Facebook executive Antonio García Martínez claimed that while he personally was not aware of emotionally targeted advertising, the company is wholly capable of it, and that to pretend otherwise is “lying through their teeth.” That Facebook has long been interested in monitoring the emotional effect of its platform on users is undeniable: back in 2014, the company came under fire for secretly manipulating the content of users’ news feeds to see whether it affected their mood, finding that being shown negative content seemingly resulted in a more negative mindset. The research subjects were not informed of the experiment, raising angry questions about consent and the use of personal data by social media companies.

Though there is not a wealth of statistical information on the topic, many people, myself included, can confirm that expressing negative emotions and talking about mental health issues online will get you inundated with ads for online therapy programs with dubious credentials. The legitimacy of these services is presumably not really Facebook’s problem; nonetheless, it’s clear that Facebook is consciously making money from mental health problems at the same time that it is rolling out suicide prevention measures.

Even the mere presence of these advertisements can have a negative effect on mental health. Caroline Sinders, a machine-learning designer, told Vice that Facebook has an ethical obligation to allow users to opt out of these ads, because repeated exposure to ads implying that users are depressed and mentally ill is “really dangerous and invasive…. While algorithmically they may seem related to what was served up before, there is a lot of harm in the causal effects of how these things manifest.” In other words, being told over and over again that you seem mentally ill is a surefire way to feel more mentally ill, even if that assertion is coming from an automatically generated sidebar ad.

People dealing with mental health issues are all too often afforded limited autonomy, considered incapable of proper decision-making by their loved ones and doctors. As Facebook becomes increasingly interested in pushing pseudo-diagnostic tools and ad content, consent and tact seem to be of as little concern to the company as privacy and the ethics of its profits. Before you report a Facebook friend for their mentally ill behavior, consider whether it would really make you feel any better to be informed that you seem crazy via an anonymous Facebook notification. Pointing out someone who needs help is easy; finding adequate and appropriate care is the hard part. Facebook cannot replicate this with an algorithm, and while it continues to collect data and predatory ad revenue from mentally ill users, it should not even try.