Ethical Assurance of Data-Driven Technologies for Mental Healthcare


Even before the global pandemic, the use of data-driven technologies in mental healthcare was increasing. Since then, the number of developers and organizations turning to data-driven technologies, such as machine learning or AI, to improve mental healthcare has risen further. Examples include novel diagnostic or assessment tools that support clinical decision-making, apps that offer self-directed therapy or behavioral change techniques, and new data-scientific methods that support clinical research.

At the same time, however, the use of such technologies raises a series of well-known ethical, social, and legal risks concerning matters such as data privacy, the explainability of automated decisions, and respect for mental integrity. If these technologies are to deliver fully on their promise of widening access to healthcare and improving mental health outcomes, users and affected stakeholders must be able to trust the claims made about how the technologies have been designed, developed, and deployed. Whether an individual can trust a claim made about a product or service depends on whether they are given justifiable assurance that it meets certain well-defined requirements.

To help establish trust, developers and organizations across the private and public sectors need to consider how to build standards and best practices that can support a process of assurance. Increasingly, the standards and best practices that are needed relate to key ethical goals and principles: for example, assurance that an automated decision-making tool meets agreed-upon ethical standards of explainability or autonomy.

To support the development of these best practices and standards of assurance, this program will work with a range of stakeholders from across the globe to identify the issues that matter most to them and to co-develop a justifiable framework of assurance that promotes trust and confidence in digital mental healthcare.

Program Workstreams

  • Virtual and face-to-face workshops designed to identify challenge areas and develop consensus on ethical norms and standards for digital mental healthcare
  • Private/public forum to support co-design of frameworks for topics such as procurement, impact assessment, and other regulatory processes
  • Refining and revising a methodology of argument-based assurance that meets the needs and challenges of diverse stakeholder groups
  • Exploration of whether specific, formal standards can be developed to provide evidential support for assurance claims

Get Involved

The program is open to all individual stakeholders with an interest in supporting the ethical development of digital mental healthcare.

This includes, but is not limited to, representatives of professional organizations as well as individual researchers from academia, the public and third sectors, or industry.

Program Chairs

  • Dr. Christopher Burr, Alan Turing Institute, UK
  • Dr. Becky Inkster, University of Cambridge, UK

How to Participate

To join the team, please express your interest by sending an inquiry to:

Additional Contact



Sign up for our monthly newsletter to learn about new developments, including resources, insights, and more.