Early Warning Sign Detection App

Hi All,

I suffer from schizophrenia, having had four psychoses during my adulthood.

I have a background in tech, and I’ve recently been involved in an entrepreneurship programme, where a couple of founders were working on a startup to deliver prescription-based digital therapeutics.

Which got me thinking… Both my wife and I have noticed early warning signs in the language I use in my emails, texts, etc. What if there were an app that people with this condition could install on their phones? It would analyse speech patterns in calls and language patterns in outgoing texts and emails, and if it identified abnormal patterns, it would send an alert to the person's partner or a family member. They could then intervene early, whether by arranging an appointment with a professional, adjusting medication, or stepping up therapy, hopefully before a full-on psychosis occurs, which can take months to a year to recover from.

I believe that, with the recent advances in natural language processing and semantic analysis, this is technically feasible.
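To make that a bit more concrete, here's a rough sketch of the kind of analysis I have in mind for the text/email side. It's only a toy example: I'm assuming an off-the-shelf sentence-embedding library (sentence-transformers) with a generic model, and the "coherence" measure, personal baseline, and alert threshold are placeholders I made up, not anything clinically validated.

```python
# Toy sketch: flag outgoing messages whose sentence-to-sentence semantic
# coherence drops well below the person's own historical baseline.
# Library, model name, and threshold are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose embedding model


def coherence_score(message: str) -> float:
    """Mean cosine similarity between consecutive sentences in a message.

    Lower values mean the sentences drift apart semantically, one of the
    markers discussed in the research on language and psychosis.
    """
    sentences = [s.strip() for s in message.split(".") if s.strip()]
    if len(sentences) < 2:
        return 1.0  # too short to judge
    emb = model.encode(sentences)
    sims = []
    for a, b in zip(emb[:-1], emb[1:]):
        sims.append(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(sims))


def should_alert(recent_scores: list[float],
                 baseline_mean: float,
                 baseline_std: float,
                 z_threshold: float = 2.0) -> bool:
    """Alert a nominated contact if recent coherence falls far below baseline."""
    return float(np.mean(recent_scores)) < baseline_mean - z_threshold * baseline_std
```

In practice the baseline would have to be learned from the person's own messages over a long, stable period, and the published approaches use much richer linguistic features than simple sentence-to-sentence similarity, but this is the general shape of it.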

Do any of you believe that your afflicted loved ones would install such an app on their phone voluntarily?

I totally understand that it wouldn’t work as an involuntary intervention - as it could further paranoia. But if it were voluntary…

What do you all think? Do you believe that your loved ones might use such an app?

I recall that researchers believe they can detect various mental illnesses from search history and other online activity, so detection itself seems plausible. That was several years ago, though, and I'm unaware of any practical results coming out of those studies. My guess is the idea chases a tricky problem in a difficult market.

In my experience there's no shortage of people who will tell you when your behavior or speech patterns or what have you indicate active illness. The difficulty is convincing the sufferer that anything is wrong, given anosognosia, either inherent or transitory. I'm unsure an algorithmic source would be any more convincing than a human one, or whether it would be considered any more objective or definitive.

Such a tool might be helpful for improving insight in someone who already has some, and who is willing to submit to monitoring. I rarely interact with people by voice over the phone; it's more often face to face or via text or email. It would be unethical to monitor every conversation with every person, so I'd rule out voice-to-text monitoring on grounds of limited coverage and accuracy problems. Text and email interactions generally have a human being on the other end who'd likely do a better job of detecting issues.

Personally, I wouldn't submit to such monitoring for the simple reason that my ultimate goal is interacting with people, not machines, and bringing a machine into the mix wouldn't add much toward that goal. It's worth considering and might be an interesting project, but I fear it would have a limited market, and that watching browser and search activity might do a better job of detection.


I’m looking into this also. People have researched it.

A machine learning approach to predicting psychosis using semantic density and latent content analysis

Detecting relapse in youth with psychotic disorders utilizing patient-generated and patient-contributed digital data from Facebook

Using Language Processing and Speech Analysis for the Identification of Psychosis and Other Disorders


@caregiver1 Interesting that one of the studies mentions Facebook. I've long said that SMI and social media don't mix, and I avoid non-anonymous social media like the plague. But I could envision applications where you could disable accounts or filter or moderate content based on conspiratorial or delusional speech. As a practical matter consent is a likely issue, but in a parenting context, or other contexts where consent is required or granted, I could see where it might be helpful.

It might also come in handy in a work context, where there's no expectation of privacy on company computing equipment. I can think of a few times a 'flame detector' might have helped me out at work :wink:.

Thanks for your insights, Maggotbrane and caregiver1. I’m going to have to read those articles. It’s really interesting to see that others have already thought along these lines.

And good point, Maggotbrane, about there being a person on the other end of those calls and messages who might pick up on something before an algorithm could. I guess the app would be more for circumstances where the afflicted individual has a wide circle of friends and/or associates who wouldn't necessarily know who to give a heads-up to if they became concerned.