AI in Mental Health for Today and Tomorrow: the IEEE Regulating AI in Digital Mental Health Forum


Leading experts in AI and mental health came together in London, UK, earlier this year for the first in the series of IEEE Regulating AI in Digital Mental Health Forums, hosted by Maria Palombini, Director of the IEEE SA Healthcare & Life Sciences Global Practice, and Dr. Becky Inkster, vice-chair of the IEEE SA Ethical Assurance of Data-Driven Technologies for Mental Healthcare Industry Connections Program.

With the number of people diagnosed with mental health disorders worldwide expected to exceed one billion, the need for care is growing at a time when there are fewer trained human caregivers to provide it. Digital mental health tools are the next best step, but critical ethical, clinical, and technical considerations must be part of the conversation. Speakers at the Forum addressed these considerations and the need for more robust regulatory oversight of AI embedded in digital mental health therapeutics.

Access, Trust, Transparency

In the keynote presentation, Liz Ashall-Payne, founding CEO of the Organisation for the Review of Care and Health Apps (ORCHA), stated that 6.6 billion people own smartphones and that five million people a day download a health app. However, mental health needs are growing rapidly and require special care. Can AI integration with digital mental health apps be a viable solution?

Ashall-Payne identified several problems: a lack of trust among users and physicians, lack of access, conflicting media coverage, and a shortage of reliable research indicating which AI technologies are safe. She said research shows only 20% of these technologies are safe. These AI paradigms are also subject to little regulation and few standards, and there is no correlation between app store ratings and app quality.

Digital healthcare products and regulations are continuously evolving. This helps products adapt quickly to users' needs but makes evaluation complex and time-consuming. Some nuance is needed here, as not all health-tech products are the same.

Several possible solutions exist. Different countries assess AI and digital mental health differently, which creates opportunities to share insights globally. Other solutions include offering patient-facing support with diverse solutions for different issues and needs, creating a prescription infrastructure, and evolving regulations and standards to ensure safety, trust, access, and transparency.

Legal Perspectives

Providing a European Union (EU) perspective, Dr. Elizabeth Steindel, a legal scholar working in the Department of Innovation and Digitalization at the University of Vienna, discussed the legal landscape shaping the EU’s environment for digital mental health. In addition to existing laws, she focused on two new pieces of legislation: the Digital Services Act (DSA) and the AI Act (AIA).

Addressing social media services, platforms, and apps in the EU, including mental health apps, the DSA aims to mitigate the adverse effects of media misrepresentations of digital health technologies, including AI. It also requires the analysis and mitigation of systemic risks such as illegal content, threats to fundamental rights and privacy, risks to electoral processes, gender-based violence, the protection of minors, and harms to both mental and physical health.

Not yet in effect, the AIA employs a four-tier risk-based approach: unacceptable, high, limited, and low risk. For example, it places mental health chatbots in the limited-risk category to ensure users are aware they are interacting with an AI system and not a human, while it classifies a mental health medical device as a high-risk AI system.

Dr. Steindel feels that new laws must focus on new priorities, such as fundamental human rights impact assessments, trustworthy and safe applications through enhanced transparency oversight, and systemic risk analyses. In brief, existing legal frameworks must be continually revised.

Canadian Perspective

Maureen Abbot from the Mental Health Commission of Canada discussed a unique mandate from the Canadian federal government that supports federal and provincial organizations, as well as individual entities, in developing best policies and practices for mental health resources. The first goal is to provide flexible care for whoever needs it; however, funding for mental health in Canada is low, so it is largely up to the individual to find support.

One solution is the Mental Health Commission of Canada's app assessment framework. Developed over three years in collaboration with ORCHA and with input from healthcare practitioners and patients, the framework stresses cultural safety, social responsibility, and equity with respect to Indigenous peoples' rights and inclusion. Other proposed measures include the use of digital twins to assess suicide risk in hospital patients and the application of digital compassion at each step of developing AI-enabled digital mental health apps.

Insights From Australia

Chris Boyd-Skinner, an assistant director of Clinical Governance at the Australian Digital Health Agency and a consultant for the World Health Organization (WHO) Office on Quality of Care and Patient Safety, offered an Australian perspective. He explained the concept of voluntary ethics in Australia's AI framework in the context of hard and soft law regulation of AI.

Boyd-Skinner also pointed out that Australia has no AI-specific legislation, standards, or framework for AI in mental health services. He stressed that there is a critical need to safeguard emerging technologies and to establish clinical and technical governance for AI mental health technology.

Academia

Dr. Julia Ives, a professor at Queen Mary University of London, discussed the use of AI in mental health for patient monitoring and screening, treating mental health conditions, and helping healthcare professionals with routine work. She stressed how critical it is to maintain data privacy, minimize AI system biases, and advance transparency in human-AI collaboration.

She went on to say that people are more likely to overshare with chatbots; patient education is essential to counter this. She also noted the low representation of minority groups in mental health data.

Dr. Ives believes AI in mental health will benefit from more and better standards and benchmarks for textual privacy and data representativeness, particularly regarding racial, social, and practitioner bias. Advancing efficient human-AI collaboration requires transparency standards that relate AI behavior to clinical expertise, along with standards for addressing uncertainty and for bias-aware AI models that better inform experts on relevant measures.

Conclusion

With the prevalence of mental health disorders escalating globally, AI shows great potential for digital mental health therapeutics. For AI-based digital mental health applications to be safe, effective, and ethical, the industry must address critical issues. These include:

  • Patient education so users know the difference between AI and human intervention
  • Healthcare professional competency in the proper use of AI-based interventions
  • Better and more accurate evaluation of AI health products
  • Better communication between humans and AI systems that takes into account speech patterns common to diverse ethnic and racial groups
  • Minimizing practitioner and AI-system biases
  • Protection of patient security and privacy
  • Complete transparency
  • The creation of mandatory (not voluntary) standards, mandates, and regulations that evolve as digital mental health products evolve

The next edition of the IEEE Regulating AI in Digital Mental Health Forum will take place on 2 October in Singapore. Following the successful format and outcomes of the London event, this next Forum will bring together regional experts to address the cultural, regulatory, and socio-economic challenges specific to Asia's highly diverse patient populations.

To learn more about IEEE SA’s efforts in the digital mental health space, please visit the IEEE SA Healthcare and Life Sciences page.
