“Just awful” experiment points suicidal teens at chatbot
After getting in hot water for using an AI chatbot to provide mental health counseling, non-profit startup Koko is now being criticized for experimenting on young adults at risk of harming themselves. Worse, the young adults were unaware they were test subjects.
Motherboard reports the experiment took place between August and September 2022. At-risk subjects, aged 18 to 25, were directed to a chatbot after posting “crisis-related” keywords such as “depression” and “sewer-slide” (a coded spelling of “suicide” used to dodge content filters) on Discord, Facebook Messenger, Telegram, and Tumblr. They were then randomly assigned either a “typical crisis response” (a referral to a crisis hotline) or a “one-minute, enhanced crisis response Single-Session Intervention (SSI)” powered by AI.
Rob Morris, Koko co-founder and Stony Brook University professor, carried out the experiment with his psychology peers Katherine Cohen, Mallory Dobias, and Jessica Schleider. The study aims to show social media platforms that pointing young adults to crisis hotlines isn’t enough; Morris says he wants to demonstrate that an AI chatbot intervention is more effective at supporting young adults struggling with mental health issues.
However, this appears to look good only on paper.
Before Koko does what it was designed to do, it first presents its privacy policy and terms of service (ToS), telling users their anonymous data may be shared and used for research. Here lies the first problem: consent to take part in the project is given simply by agreeing to Koko’s privacy policy and ToS. As we all know, the great majority of people online don’t read these, and it’s presumably not the first thing on the mind of an at-risk young adult in crisis either.
When asked about provisions for true consent, Morris tells Motherboard, “There are many situations in which the IRB would exempt researchers from obtaining consent for very good reasons because it could be unethical, or impractical, and this is especially common for internet research. It’s nuanced.” An IRB, or institutional review board, is also called a research ethics committee. Essentially, it’s the body responsible for protecting human research subjects.
The second problem involves data. The preprint reveals that subjects provided their age, gender identity, and sexual identity to the researchers. Such datasets may be anonymous, but studies show they can still be traced back to specific individuals with up to 99.98 percent accuracy. “Most IRBs give a pass to ‘de-identified’ research as they claim there can be no privacy or security harms. But, in this case, they are collecting demographic information which could be used to identify users,” Eric Perakslis, chief science and digital officer at the Duke Clinical Research Institute, told Motherboard.
And the last problem, the one that alarmed and appalled researchers and psychologists alike, is that the experiment was classified as “nonhuman subjects research.” This means the subjects were stripped of the safety and privacy protections normally owed to human research participants.
“Completely, horribly unethical. Mucking around in an experimental manner with unproven interventions on potentially suicidal persons is just awful,” New York University bioethics professor Arthur Caplan was quoted as saying.
“If this is the way entrepreneurs think they can establish AI for mental diseases and conditions, they had best plan for a launch filled with backlash, lawsuits, condemnation and criticism. All of which are entirely earned and deserved.”
“I have not in recent years seen a study so callously asleep at the ethical wheel. Dealing with suicidal persons in this way is inexcusable,” he added.