UMC Utrecht – Value Alignment in Medical AI (VAMAI)

A patient-driven ELSA lab for responsible medical AI

Artificial Intelligence (AI) is rapidly transforming healthcare, offering major opportunities to improve diagnostics, decision-making, and patient care. At the same time, the use of AI in medical practice raises various ethical, legal, and societal questions: how do we safeguard patient rights, build appropriate trust, and ensure that AI supports good care in ways that align with human values? The Value Alignment in Medical AI (VAMAI) project, led by Dr. Karin Jongsma of UMC Utrecht, addresses these questions by focusing on the ethical, legal, and societal aspects (ELSA) of medical AI across different stages of design, implementation, and deployment in clinical practice. With support from Hezelburcht, the VAMAI consortium successfully secured funding under the NGF AiNed ELSA Labs programme from NWO.

Aligning AI with values in healthcare

Healthcare is one of the largest and fastest-growing application domains for AI, yet many AI solutions struggle to move from development to routine clinical use. While European regulations such as the EU AI Act aim to safeguard fundamental rights and ensure safety, they offer limited guidance on how ethical and legal principles should be applied in everyday medical practice. As a result, healthcare professionals, patients, and developers face uncertainty about how AI can be used responsibly and meaningfully in clinical settings. VAMAI tackles this problem by focusing on value alignment: rather than treating AI solely as a technical system, the project examines how AI can best interact with patients, healthcare professionals, and existing legal frameworks.

A patient-driven ELSA Lab built around real use cases

VAMAI is set up as a patient-driven ELSA Lab, uniting an interdisciplinary consortium of researchers in ethics, law, social sciences, and AI, alongside healthcare professionals, patient representatives, and industry partners. Using a multi-case study approach, VAMAI studies concrete medical AI applications, generating practical insights into trust, explainability, fairness, accountability, and patient involvement. These insights are translated into guidance and tools that developers, healthcare organisations, and policymakers can use to support the responsible design and implementation of medical AI.

Societal and economic impact

The ambition of VAMAI extends beyond academic research. By embedding ethical, legal, and societal considerations early in the development and implementation of AI, the project contributes to medical AI systems that are more trustworthy, acceptable, and ready for use in clinical practice. This supports better patient outcomes, strengthens confidence among healthcare professionals, and helps organisations navigate existing and emerging regulations. In doing so, VAMAI contributes to sustainable innovation in healthcare and strengthens the Netherlands’ position within the European AI ecosystem.

The role of Hezelburcht

Hezelburcht supported the VAMAI consortium throughout the grant application process. This included strategic advice on positioning the project within the call framework, structuring the interdisciplinary consortium and impact pathway, and translating the project’s ambitions into a clear and competitive proposal. We are delighted that the VAMAI consortium can now carry out its project and wish the team the best of luck in aligning values in medical AI.