September 3-4, 2025

AI Meets Medicine: Collaborative Solutions for the Future of Healthcare

Building Safe, Inclusive AI Systems to Improve Healthcare Efficiency

Join us

In partnership with:

We are reimagining the traditional datathon format, shifting the focus from pure model-building to the real-world evaluation of AI tools.

Core objectives

  • Interdisciplinary Collaboration

    Bring together healthcare professionals, technologists, social scientists, and end users to co-create AI solutions that address real-world healthcare challenges.

  • Real-World Evaluation

    Shift the focus from AI model development to assessing how these systems perform in real healthcare settings — in diagnosis, treatment planning, and doctor–patient communication.

  • User Engagement

    Include patients and frontline healthcare workers in the evaluation process to ensure that AI systems are truly usable, relevant, and responsive to real needs.

  • Ethical and Resilient AI Design

    Promote the creation of fair, safe, and adaptable AI systems that address bias and legal implications and are built to withstand dynamic social and political environments.

Participants:

A diverse, multidisciplinary group, ensuring a broad range of perspectives.

This transformative event is led by

Dr. Leo Celi, a global leader in healthcare AI and critical care data science, with support from MIT and Harvard Medical School.

Program

September 3-4, 2025, Milan

  • In partnership with Elty, a tech company of the Unipol Group.

    Day 1

    The event will gather Elty engineers and healthcare professionals—mainly general practitioners and clinicians—to collaborate in interdisciplinary teams. They will co-design generative AI solutions aimed at improving clinical workflows, patient care, and healthcare equity. Unlike Day 2, Day 1’s outcomes will be directly applied to Elty’s real-world products, making it a hands-on session with immediate impact.

  • Hosted at Politecnico di Milano

    Day 2

    On Day 2, we’ll explore the ethical and theoretical dimensions of AI in healthcare, with a focus on bias, equity, and critical care algorithms. Participants—especially students and young doctors—will evaluate generative tools like ChatGPT to identify risks, inaccuracies, and fairness issues. The goal is not to develop products, but to deepen understanding and improve the evaluation of AI tools in real-world healthcare contexts.

Join us in shaping the future of responsible, inclusive AI in healthcare