Moderators/Speakers: Dr. Robert Califf (former FDA Commissioner), Dr. Joanne Foody (Johnson & Johnson), Phil Ratcliffe (GE Healthcare), and Dr. Shiv Rao (Abridge)
Balancing Innovation and Regulation in AI-Driven Cardiovascular Care
Key Highlights
Initial discussion focused on the tension between fostering rapid innovation and ensuring patient safety through regulatory oversight. Dr. Rao emphasized the need to prioritize AI applications case by case, suggesting that friction—through regulation, transparency, or responsible AI frameworks—is essential for high-impact clinical tasks. He noted that despite healthcare’s historical slowness to adopt new technologies, AI has scaled rapidly in this space due to longstanding unmet needs.
Dr. Califf described the FDA’s balancing act between innovation and public safety, stating that AI’s iterative and pervasive nature requires a non-traditional, ecosystem-wide regulatory model. He advocated for collective accountability among regulators, health systems, professional societies, and industry partners—drawing an analogy to food safety systems that rely on standardized practices rather than micromanagement.
Dr. Foody underscored the importance of trust, patient-centered design, and using AI to enable clinicians to practice at the top of their license. She highlighted AI’s potential to streamline administrative burdens like prior authorizations and regulatory documentation, while also accelerating drug development and clinical decision support.
A key theme was the evolving role of patients, who increasingly use generative AI tools to inform their care and challenge clinicians. Panelists emphasized the need for clinicians to partner with patients in navigating these tools, while also using AI to enhance care delivery.
The segment concluded with a call to action for professional societies and health systems to make evidence-based knowledge more accessible at the point of care, and a segue to GE Healthcare’s longstanding integration of AI in imaging, setting the stage for discussion on real-world clinical applications.
Operationalizing AI: From Imaging and Access to Education and Data Governance
Key Highlights
Phil Ratcliffe of GE Healthcare emphasized that while the hardware footprint of imaging devices remains consistent, the focus has shifted dramatically toward AI integration. GE has surpassed 100 FDA clearances for AI-enabled tools, and is leveraging multimodal data—across imaging modalities and patient inputs—to improve diagnostic accuracy, access, and outcomes. A notable application discussed was AI-guided echocardiography enabling non-specialists to acquire diagnostic-quality cardiac images, addressing workforce shortages and rural access.
The conversation pivoted to the role of AI in expanding access to care, with the panel exploring how untrained personnel could use AI-guided tools to obtain usable cardiac imaging remotely. They also touched on upcoming data from clinical research evaluating AI’s potential to upskill novice sonographers. These examples highlight how AI can democratize care delivery while enhancing efficiency.
Education emerged as a parallel frontier for AI innovation. Dr. Ami Bhatt and Dr. Berlacher discussed the potential of personalized, adaptive medical education platforms, analogous to Netflix's recommendation engine, that could identify knowledge gaps and serve relevant educational content. The goal is to transition from traditional continuing medical education to predictive learning models that tailor content based on usage patterns and performance.
Panelists then addressed the barriers to scaling AI innovation globally—particularly the fragmentation of data across institutions, states, and countries due to regulatory, privacy, and interoperability constraints. Dr. Joanne Foody and Dr. Califf stressed the need for harmonized data frameworks that balance access with security and integrity. Dr. Califf expressed frustration that while the technology exists to combine global clinical data, entrenched human and institutional behaviors—like data hoarding and jurisdictional silos—remain major impediments, even within the U.S. health system.
Post-deployment AI monitoring was identified as another critical gap. Dr. Califf noted that unlike traditional devices, generative AI systems require continuous performance evaluation. Metrics like model calibration, discrimination, and real-world accuracy must be monitored in near real time—an infrastructure that currently does not exist across most U.S. health systems.
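Two of the metrics named above can be made concrete. The following stdlib-only Python sketch is illustrative rather than any health system's actual pipeline: it computes discrimination (AUROC, via pairwise ranking) and calibration-in-the-large on a batch of predicted risks and observed outcomes, the kind of quantities a post-deployment monitor would track over rolling windows.

```python
# Illustrative post-deployment monitoring metrics for a binary risk model.
# Inputs: a batch of predicted probabilities and observed 0/1 outcomes.
# A real monitoring system would compute these continuously on live data.

def discrimination_auc(preds, labels):
    """AUROC: probability a random positive case outranks a random negative."""
    pos = [p for p, y in zip(preds, labels) if y == 1]
    neg = [p for p, y in zip(preds, labels) if y == 0]
    if not pos or not neg:
        return float("nan")  # metric undefined without both classes
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(preds, labels):
    """Mean predicted risk minus observed event rate (0 is ideal)."""
    return sum(preds) / len(preds) - sum(labels) / len(labels)

# Small worked example with made-up numbers.
preds = [0.9, 0.8, 0.3, 0.2, 0.6]
labels = [1, 1, 0, 0, 1]
print(discrimination_auc(preds, labels))            # perfect ranking -> 1.0
print(round(calibration_in_the_large(preds, labels), 3))  # slight under-prediction
```

In practice these numbers would be recomputed per site and per subgroup, with alerts when discrimination or calibration drifts beyond a preset tolerance.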
The segment concluded with a forceful call for professional societies like the ACC to take a leading role in pushing for data integration and accountability. Dr. Bhatt added that advancing a model of "patient agency"—where patients are equal partners in contributing and interpreting their health data—will be essential to realizing the full potential of AI-enabled cardiovascular care.
Innovation, Access, and Ethical Challenges in AI Integration
Key Highlights
The panel raised an important ethical dilemma: what happens when AI-based decision support systems recommend actions that, while beneficial to patient outcomes, could lead to financial losses for healthcare practices or systems? This scenario would create a direct and undeniable conflict between the business interests of healthcare and the well-being of patients, making it difficult to ignore or justify inaction. The broader tension between population health goals and a profit-driven healthcare system was also highlighted—these two concepts often don't align naturally.
An example from outpatient cardiology clinics was discussed to illustrate how system-level changes can benefit both patients and providers. A clinic that used virtual scribes was able to improve patient satisfaction scores on measures like perceived time spent with the doctor, without compromising clinical efficiency. The doctors in this case resisted the pressure to increase patient churn and instead advocated for solutions that supported quality care. This example underscored that with enough willpower and vision, clinicians can push back against system-level constraints and advocate for better models of care delivery.
How can incentives in the healthcare system be realigned to support optimal patient care at the point of delivery? Despite growing awareness, the current system is still fundamentally misaligned—focused more on managing illness than promoting health. One panelist, a preventive cardiologist, emphasized that even with the promise of democratized technology, the system must undergo significant redesign to redirect resources effectively and improve access. As it stands, the system is achieving exactly what it is structured to achieve—and that, unfortunately, is poor health outcomes.
AI in Clinical Operations, Training, and the Future of Cardiovascular Care
Key Highlights
Speakers discussed a near-future scenario where autonomous AI tools manage chronic conditions like hypertension at scale. For example, remote blood pressure monitoring combined with tiered analytics—color-coded red/yellow/green thresholds—could allow nurses and clinicians to efficiently triage large populations. The importance of not overwhelming providers with raw data was emphasized. Instead, systems must deliver actionable insights that highlight urgency and streamline care.
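The red/yellow/green tiering described above can be sketched as simple threshold logic. The cutoffs below are illustrative placeholders only, not clinical guidance; the point is that staff see a prioritized worklist rather than raw readings.

```python
# Hypothetical tiered triage of remote blood pressure readings.
# Thresholds are illustrative; real cutoffs would come from clinical
# guidelines and local protocol design.

def triage_bp(systolic: int, diastolic: int) -> str:
    """Map a single BP reading to a red/yellow/green urgency tier."""
    if systolic >= 160 or diastolic >= 100:
        return "red"     # urgent: escalate to a clinician promptly
    if systolic >= 140 or diastolic >= 90:
        return "yellow"  # elevated: nurse follow-up within days
    return "green"       # in range: routine monitoring continues

def triage_population(readings):
    """Group patient IDs by tier so providers see urgency, not raw data."""
    tiers = {"red": [], "yellow": [], "green": []}
    for patient_id, (sys_bp, dia_bp) in readings.items():
        tiers[triage_bp(sys_bp, dia_bp)].append(patient_id)
    return tiers

print(triage_population({"p1": (165, 95), "p2": (142, 88), "p3": (118, 76)}))
```

A production workflow would add trend analysis across repeated readings and automatic escalation rules, but the triage layer itself can stay this simple.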
This model is already partially in use in postpartum hypertension care, but scalability will depend on collaborative design with industry partners who can provide not just hardware but intelligent data analytics. The vision includes physician managers overseeing thousands of patients with the aid of AI-augmented workflows, allowing for proactive interventions and resource optimization.
Educational implications were also explored. While AI can support clinical reasoning—such as assisting with complex congenital heart disease management—it must not replace the cognitive struggle essential to learning. Trainees must still develop deep understanding to ensure long-term knowledge retention. There is a risk that overly automated solutions may reduce cognitive engagement, undermining clinical growth.
Technologies such as autonomous ultrasound—robotic arms capturing cardiac measurements—were presented as a way to democratize diagnostics in underserved regions. However, panelists reiterated that automation must be paired with prioritization logic so that urgent cases are escalated appropriately, avoiding a deluge of unfiltered data.
The conversation shifted toward abstraction and the evolving skillsets needed in a tech-enabled workforce. Drawing on parallels from software engineering, it was suggested that as AI handles increasingly complex tasks, humans will operate at higher conceptual levels. Emotional intelligence (EQ), creativity (CQ), and clinical judgment will become even more critical than rote knowledge. Trainees of the future must therefore be prepared to think differently, integrating prompt engineering and critical analysis as core competencies.
Educational efforts like prompt generation training at the ACC were highlighted as foundational. However, rapid advancements in AI may soon render today’s methods obsolete. The panelists acknowledged this pace of change with a mix of excitement and concern, urging the community to cultivate adaptability and a comfort with discomfort.
Finally, participants expressed optimism that AI would soon become so seamlessly integrated into cardiology that it would no longer be a distinct topic of discussion. Instead, the focus would return to scientific discovery, clinical excellence, and innovation in care delivery. Cardiologists, known for early tech adoption, were positioned as natural leaders in this transformation, provided they remain actively engaged in shaping the tools and frameworks that will define the future of medicine.
Guardrails, Governance, and the Call to Action
Key Highlights
The concluding segment of the session focused on governance, sustainability, and clinician responsibility in shaping the ethical deployment of AI in cardiovascular medicine.
A critical concern raised was transparency and control over AI’s “why”—the purpose behind its use. While AI has the power to make trade-offs explicit and reduce moral injury among clinicians, the intent behind its deployment must align with patient-centered goals. There is apprehension that AI could be driven by non-clinical motives, such as profit or efficiency alone, rather than clinical benefit. Ensuring it is "put to the right purpose" emerged as a recurring theme.
Sustainability also surfaced as a growing concern, with speakers emphasizing that not every use case justifies AI implementation. Computing power and environmental costs of large-scale AI models were noted as non-trivial. As AI adoption grows, selectivity and prioritization will be crucial—ensuring its application where it meaningfully improves care, rather than deploying it indiscriminately.
In a powerful metaphor, one panelist likened the current state of medical and research systems to "Humpty Dumpty mid-fall"—fragmented and disassembled across academia, regulation, and care delivery. Yet, they offered a hopeful counterpoint: this moment presents an opportunity to rebuild those structures with intentionality, equity, and innovation. The reassembly must prioritize evidence, trust in science, and resilience against rising populist skepticism toward expertise.
Each panelist offered a final takeaway, creating a resonant call to action. Key messages included:
- Rebuild trust in scientific infrastructure and align incentives to serve patient outcomes, not institutional inertia.
- Recognize AI as an embedded future, similar to the evolution of the internet—soon to be invisible yet indispensable.
- Prioritize patients and evidence, leveraging AI to deliver the right therapy at the right time for the right individual.
- Take agency and act: clinicians were urged to get involved, experiment, and lead change in their institutions and practices.
- Embrace bravery and leadership: human clinicians, equipped with both EQ and IQ, are the “apex intelligence” and must guide AI’s future trajectory.
- Be intentional with AI use, understanding its limits, selecting high-impact applications, and ensuring it aligns with healthcare's broader sustainability goals.
ACC.25, March 29 - 31, 2025, Chicago