Will your future smart device give you a “red flag”?

Source: Dariusz Sankowski on Pixabay. Altered. Used with permission.

There has been a recent push for new mental health strategies to prevent violence and other social problems. One approach under investigation involves technological innovations such as ‘Mental Health Apps’ (MHAs), which offer new possibilities for reaching patients and addressing risks. But what rules and strategies should accompany the advent of MHA technology?

Mental health apps have been available for some time, as mentioned in a previous article.† First-generation MHAs usually provided reminders and positive messages, which could be helpful for attentiveness, sleep hygiene, life/sickness management, and skills training. Unlike human therapists, digital mental health apps are available 24/7. In addition to providing log prompts and inspirational messages, mental health apps also collect self-report data. User comments are stored in a database and analyzed to provide feedback.

New-generation MHAs integrate biosensors in devices such as smartwatches, phones, and sensor pads to monitor fluctuations in the user’s daily signals. The latest devices record everything from physical activity and sleep data to skin resistance, temperature, blood oxygen, and EKG readings, and some include fall detectors and even emergency medical alerts. These body-worn devices monitor readings and activity automatically, reducing the burden on patients of entering data themselves. The latest MHAs process all of that biological and psychological data with AI algorithms to identify trends and give feedback. In the near future, they are also likely to offer preliminary diagnoses and even treatments. For example, your future MHA’s biosensors may detect an unusually high tension reading and recommend a wellness checklist or a relaxation module. You will talk to your AI therapist, and your device will let you know when your metabolism returns to a healthier level.
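To make the idea concrete, here is a minimal sketch, in Python, of the kind of baseline-relative trend detection such a device might perform. Everything here is hypothetical: the readings, the flagging rule, and the threshold are illustrations, not any actual vendor’s algorithm.

```python
from statistics import mean, stdev

# Hypothetical hourly skin-conductance readings from a wearable (microsiemens).
# A real MHA would pull these from a device API; these values are illustrative.
baseline = [2.1, 2.3, 2.0, 2.4, 2.2, 2.1, 2.3, 2.2]
latest_reading = 4.8

def is_elevated(history, reading, z_threshold=3.0):
    """Flag a reading more than z_threshold standard deviations
    above the user's own baseline (an assumed, illustrative rule)."""
    mu, sigma = mean(history), stdev(history)
    return (reading - mu) / sigma > z_threshold

if is_elevated(baseline, latest_reading):
    # A real app might surface a wellness checklist or relaxation module here.
    print("Unusually high tension reading - suggesting a relaxation module.")
```

One design choice reflected in this sketch is that readings are compared against the user’s own history rather than a population norm, so that alerts adapt to the individual.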

But questions remain: where will the use of mental health data go in the future? What guardrails are needed for mental health data collected by MHAs and digital devices?

Several steps can be considered:

  1. Psychologists must validate the accuracy of MHAs. Consider the consequences of misdiagnosis, false positives, or false negatives. Beta testing an app is not as thorough as conducting clinical trials.1 Physicians can work with engineers and software developers to make MHAs more accurate, safer, and more effective. The future of digital therapies requires clinical studies of efficacy and consumer education on the use and misuse of new technologies. For example, some researchers have experimented with Internet-based cognitive behavior therapy for diagnoses of depression and anxiety.2 Such well-controlled research is necessary if MHAs and body-worn sensor data are to gain accuracy and acceptance.
  2. Rules are needed for how MHA data is shared. Does user data go into digital mental health records? Will this data give patients better risk assessment and access to treatment? On the other hand, how or when will mental health data be used to “signal” those perceived as a risk to themselves or others? What is the procedure for getting a second opinion or questioning an AI-based diagnosis? How can users remove a red flag that an MHA algorithm has deemed appropriate? Strict user rights and privacy protections are crucial for the new frontier of digital mental health, especially if we want patients to adopt and use the new technology.3
  3. MHAs will eventually evolve toward providing treatments. In the future, a high risk score may prompt an MHA to recommend therapy or direct potential patients to mental health services (a sketch of such threshold-based triage follows this list). Soon, virtual mental health assistants could serve as confidential sounding boards, prompting users to reveal their problems, stories, and feelings. Perhaps some people would prefer “therapy” with an anonymous, non-judgmental robot? This will be the brave new world of computer-mediated assessment and therapy. Innovation and testing are still needed, but these technologies hold great potential to guide services toward addressing mental health issues.4
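As a thought experiment, the triage such a system implies could look like the sketch below. The score scale, cutoffs, and recommended actions are invented for illustration; as step 1 argues, any real mapping would need clinical validation.

```python
def recommend_action(risk_score: float) -> str:
    """Map a hypothetical 0-1 composite risk score to a next step.
    The cutoffs are illustrative and would require clinical validation."""
    if risk_score >= 0.8:
        return "Direct the user to professional mental health services."
    if risk_score >= 0.5:
        return "Recommend scheduling a consultation with a therapist."
    if risk_score >= 0.3:
        return "Suggest a self-guided wellness or relaxation module."
    return "Continue routine monitoring."

# Example: a high score routes the user toward professional services.
print(recommend_action(0.85))
```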

As MHAs gain acceptance, developers and clinicians will need to establish rules that protect user privacy, along with the circumstances in which MHA data can be used ethically and legally to improve public safety. The key is to balance patient privacy rights and HIPAA compliance with the desire to identify and intervene in mental health crises.

“Take a balanced approach.”
