The European Union Artificial Intelligence (AI) Act was adopted in 2024 to protect public interests from AI-related harms. We aimed to determine whether the AI Act captures the most relevant stakeholders’ values for a high-risk AI Clinical Decision Support System (CDSS) for cardiovascular disease. Using ethics and design methodologies, we empirically identified stakeholder values, translated them into the trustworthy AI requirements of the High-Level Expert Group on AI, and contrasted them with the AI Act. These requirements do not capture all relevant stakeholder values, and we provide three specific avenues to complement them. The AI Act mainly supports the requirements of human agency and oversight, transparency, and diversity, non-discrimination and fairness, but only partially, owing to its nature as product safety legislation. To address this gap, contractual strategies could provide a more legally robust basis for general practitioners (GPs) to align the AI-CDSS with stakeholders’ values, comply with their duty of care, and promote trustworthy AI-CDSSs.