Mental Health

Crisis Text Line tried to monetize its users. Can big data ever be ethical?

The crisis intervention service had concerns about its financial future, but made a huge mistake.
By Rebecca Ruiz
Crisis Text Line tried to make a business out of user data. It went terribly wrong. Credit: Vicky Leta / Mashable

Years after Nancy Lublin founded Crisis Text Line in 2013, she approached the board with an opportunity: What if they converted the nonprofit's trove of user data and insights into an empathy-based corporate training program? The business strategy could leverage Crisis Text Line's impressive data collection and analysis, along with lessons about how best to have hard conversations, and thereby create a needed revenue stream for a fledgling organization operating in the woefully underfunded mental health field.

The crisis intervention service is actually doing well now; it brought in $49 million in revenue in 2020 thanks to increased contributions from corporate supporters to meet pandemic-related needs and expansion, as well as a new round of philanthropic funding. But in 2017, Crisis Text Line's income was a relatively paltry $2.6 million. When Lublin proposed the for-profit company, the organization's board was concerned about Crisis Text Line's long-term sustainability, according to an account recently published by founding board member danah boyd.

The idea of spinning off a for-profit enterprise from Crisis Text Line raised complex ethical questions about whether texters truly consented to the monetization of their intimate, vulnerable conversations with counselors, but the board approved the arrangement. The new company, known as Loris, launched in 2018 with the goal of providing unique "soft skills" training to companies. 

What wasn't clear, however, was that Crisis Text Line had a data-sharing agreement with Loris that gave the company access to scrubbed, anonymized user texts, a fact that Politico reported last week. The story also contained concerning information about Loris' business model, which sells enterprise software to companies for the purpose of optimizing customer service. On Monday, a Federal Communications Commission commissioner requested the nonprofit cease its data-sharing relationship, calling the arrangement "disturbingly dystopian" in a letter to Crisis Text Line and Loris leadership. That same day, Crisis Text Line announced that it had decided to end the agreement and requested that Loris delete the data it had previously received.

"This decision weighed heavily on me, but I did vote in favor of it," boyd wrote about authorizing Lublin to found Loris. "Knowing what I know now, I would not have. But hindsight is always clearer." 

Though proceeds from Loris are supposed to support Crisis Text Line, the company played no role in the nonprofit's increased revenue in 2020, according to Shawn Rodriguez, vice president and general counsel of Crisis Text Line. Still, the controversy over Crisis Text Line's decision to monetize data generated by people seeking help while experiencing intense psychological or emotional distress has become a case study in the ethics of big data. When algorithms go to work on a massive data set, they can deliver novel insights, some of which could literally save lives. Crisis Text Line, after all, used AI to determine which texters were more at risk, and then placed them higher in the queue.
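Crisis Text Line hasn't published how that triage works, but the general pattern is easy to illustrate. Below is a minimal, purely hypothetical sketch, assuming a classifier (here a stand-in function called risk_score) that rates a texter's opening message and a priority queue that surfaces the highest-scoring conversation first; none of the names, terms, or thresholds come from Crisis Text Line's actual system.

```python
# Hypothetical illustration only: this is not Crisis Text Line's code.
# It shows the general idea of risk-based queueing, in which a model's
# risk score determines which conversation a counselor sees first.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedTexter:
    # heapq pops the smallest item first, so the risk score is stored
    # negated to surface the highest-risk texter at the top of the queue.
    neg_risk: float
    conversation_id: str = field(compare=False)

def risk_score(message: str) -> float:
    """Stand-in for a trained classifier; returns a score between 0 and 1."""
    high_risk_terms = {"hurt myself", "suicide", "can't go on"}
    return 1.0 if any(term in message.lower() for term in high_risk_terms) else 0.2

queue: list[QueuedTexter] = []
for convo_id, first_message in [
    ("a1", "I had a rough day at school"),
    ("b2", "I want to hurt myself tonight"),
]:
    heapq.heappush(queue, QueuedTexter(-risk_score(first_message), convo_id))

next_up = heapq.heappop(queue)
print(next_up.conversation_id)  # "b2" is served first
```

The point is the design choice, not the code: a model's judgment about risk directly reorders who gets help first, which is exactly why the provenance and downstream use of that training data matter so much.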

Yet the promise of such breakthroughs often overshadows the risks of misusing or abusing data. In the absence of robust government regulation or guidance, nonprofits and companies like Crisis Text Line and Loris are left to improvise their own ethical framework. The cost of that became clear this week with the FCC's reprimand and the sense that Crisis Text Line ultimately betrayed its users and supporters. 

Leveraging empathy

When Loris first launched, Lublin described its seemingly virtuous ambitions to Mashable: "Our goal is to make humans better humans."

In the interview, Lublin emphasized translating the lessons of Crisis Text Line's empathetic and data-driven counselor training to the workplace, helping people to develop critical conversational skills. This seemed like a natural outgrowth of the nonprofit's work. It's unclear whether Lublin knew at the time, but didn't explicitly state, that Loris would have access to anonymized Crisis Text Line user data, or whether the company's access changed after its launch.

"If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced," wrote boyd, who referred Mashable's questions about her experience to Crisis Text Line. "If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy." 

"I wanted to help others develop and leverage empathy." 

But at some point Loris pivoted away from its mission. Instead, it began offering services to help companies optimize customer service. On LinkedIn, the company cites its "extensive experience working through the most challenging conversations in the crisis space" and notes that its live coaching software "helps customer care teams make customers happier and brands stand out in the crowd."

Spinning off Loris from Crisis Text Line may have been a bad idea from the start, but its commercialization of user data to help companies improve their bottom line felt shockingly unmoored from the nonprofit's role in suicide prevention and crisis intervention.

"A broader kind of failure"

John Basl, associate director of AI and Data Ethics Initiatives at the Ethics Institute of Northeastern University, says the controversy is another instance of a "broader kind of failure" in artificial intelligence. 

While Basl believes it's possible for AI to unequivocally benefit the public good, he says the field lacks an "ethics ecosystem" that would help technologists and entrepreneurs grapple with the kind of ethical issues that Crisis Text Line tried to resolve internally. In biomedical and clinical research, for example, federal laws govern how research is conducted, decades of case studies provide insights about past mistakes, and interdisciplinary experts like bioethicists help mediate new or ongoing debates. 

"In the AI space, we just don't have those yet," he says. 

The federal government grasps the implications of artificial intelligence. The Food and Drug Administration's consideration of a regulatory framework for AI medical devices is one example. But Basl says that the field is having trouble reckoning with the challenges raised by AI in the absence of significant federal efforts to create an ethics ecosystem. He can imagine a federal agency dedicated to the regulation of artificial intelligence, or at least subdivisions in major existing agencies like the National Institutes of Health, the Environmental Protection Agency, and the FDA.

Basl, who wasn't involved with either Loris or Crisis Text Line, also says that motives vary inside organizations and companies that utilize AI. Some people seem to genuinely want to use the technology ethically, while others are more profit-driven.

Critics of the data-sharing between Loris and Crisis Text Line argued that protecting user privacy should've been paramount. FCC Commissioner Brendan Carr acknowledged fears that even scrubbed, anonymized user records might contain identifying details, and said there were "serious questions" about whether texters had given "meaningful consent" to have their communication with Crisis Text Line monetized.

"The organization and the board has always been and is committed to evolving and improving the way we obtain consent so that we are continually maximizing mental health support for the unique needs of our texters in crisis," Rodriguez said in a statement to Mashable. He added that Crisis Text Line is making changes to increase transparency for users, including by adding a bulleted summary to the top of its terms of service.

"You're collecting data about people at their most vulnerable and then using it for an economic exercise"

Yet the nature of what Loris became arguably made the arrangement ethically bereft. 

Boyd wrote that she understood why critics felt "anger and disgust." 

She ended her lengthy account by posing a list of questions to those critics, including: "What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others?" 

When boyd posted a screenshot of those questions to her Twitter account, the responses were overwhelmingly negative, with many respondents calling for her and other board members to resign. Several shared the sentiment that their trust in Crisis Text Line had been lost.

It's likely that Crisis Text Line and Loris will become a cautionary tale about the ethical use of artificial intelligence: Thoughtful people trying to use technology for good still made a disastrous mistake.

"You're collecting data about people at their most vulnerable and then using it for an economic exercise, which seems to not treat them as persons, in some sense," said Basl. 

If you want to talk to someone or are experiencing suicidal thoughts, call the National Suicide Prevention Lifeline at 1-800-273-8255. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. Here is a list of international resources.

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Prior to Mashable, Rebecca was a staff writer, reporter, and editor at NBC News Digital, special reports project director at The American Prospect, and staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a Master's in Journalism from U.C. Berkeley. In her free time, she enjoys playing soccer, watching movie trailers, traveling to places where she can't get cell service, and hiking with her border collie.

