Speaker: Dr. Jamal Nabhani, United States

Important Takeaways

  1. Transparency & Accountability: Disclose AI’s role clearly, including the model type, the rationale for its use, and its contributions.

  2. Attribution & Explainability: AI’s role must be attributed properly, with efforts to explain its operations despite its “black box” nature.

  3. Bias & Flywheel Effect: AI can perpetuate bias in its training data, creating self-reinforcing cycles of misinformation.

  4. Risks of Open-Source AI Models: Confidentiality is compromised when patient data is fed into open-source AI tools.

  5. AI in Peer Review: Using AI in peer review threatens confidentiality, and the model may fail to recognise scientific novelty.

  6. Plagiarism & Hallucination Risk: AI can unintentionally plagiarise or fabricate data, distorting research outputs.

  7. Mitigation Strategy: Follow publishing guidelines, disclose AI use, and develop research-specific AI models.

Key Highlights

Ethical Promise and Potential of GAI:

Dr. Jamal emphasised AI’s ability to streamline processes such as article formatting and translations, enhancing global research collaboration. However, AI integration requires careful attention to patient confidentiality, data fairness, and transparency to maintain integrity in research outputs.

Ethical Concerns in Research:

AI use in research faces challenges with patient privacy and explainability, as the “black box” nature of AI makes understanding its decisions difficult. This can create ethical dilemmas, akin to prescribing a drug without knowing its mechanism. Additionally, bias in training data can skew interpretations of clinical outcomes.

Issues in Scientific Publishing:

Transparency in publishing is crucial. Researchers must specify the AI model used, the reason for using it, and the prompts or outputs involved. Although AI cannot be listed as an author, full responsibility for its outputs lies with the researchers.
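To make this concrete, a minimal sketch of how such a disclosure could be captured as a structured record is shown below. The `AIUseDisclosure` class, its field names, and the example values are hypothetical illustrations, not a published standard; journals typically collect this information as free text in the methods or acknowledgements.

```python
# Hypothetical sketch: one way to record the disclosure fields the talk
# calls for (model used, rationale, prompts/outputs, responsibility).
# All names and values here are illustrative, not any journal's schema.
from dataclasses import dataclass, field


@dataclass
class AIUseDisclosure:
    model: str                  # name and version of the model used
    rationale: str              # why AI was used at all
    tasks: list[str]            # which parts of the work it touched
    prompts_logged: bool        # whether prompts/outputs were retained
    responsible_authors: list[str] = field(default_factory=list)

    def statement(self) -> str:
        """Render a plain-language disclosure for the manuscript."""
        return (
            f"{self.model} was used for {', '.join(self.tasks)} "
            f"({self.rationale}). Prompts and outputs "
            f"{'were' if self.prompts_logged else 'were not'} retained. "
            f"The authors ({', '.join(self.responsible_authors)}) take "
            "full responsibility for all AI-assisted content."
        )


disclosure = AIUseDisclosure(
    model="ExampleLLM v1.2",  # placeholder model name
    rationale="language editing for non-native English speakers",
    tasks=["grammar correction", "abstract shortening"],
    prompts_logged=True,
    responsible_authors=["A. Author", "B. Author"],
)
print(disclosure.statement())
```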

Explainability and Bias in AI-Driven Research:

The complexity of AI models complicates the methods section of research publications, as the rationale behind conclusions can be opaque. Moreover, AI may reinforce outdated ideas due to its reliance on past data, challenging the generation of original insights.

Risks of Plagiarism, Hallucinations, and Distortion:

Generative AI's reliance on existing data increases the risk of plagiarism or “hallucinations,” where the model fabricates information. As models improve, distinguishing between human and AI-generated content becomes harder, heightening the potential for unintentional errors.
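One crude safeguard (our sketch, not something presented in the session) is to screen AI-assisted text for verbatim word n-gram overlap with known sources before submission. Production plagiarism checkers are far more sophisticated, but the underlying idea is the same.

```python
# Illustrative sketch only: flag AI-assisted text whose word n-grams
# overlap heavily with a known source. Real plagiarism screening uses
# much richer matching; this merely demonstrates the idea.

def ngram_overlap(candidate: str, source: str, n: int = 5) -> float:
    """Return the fraction of the candidate's n-grams found in the source."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    cand = ngrams(candidate)
    return len(cand & ngrams(source)) / len(cand) if cand else 0.0


draft = "the study demonstrates a significant reduction in operative time across all cohorts"
known = "our study demonstrates a significant reduction in operative time across all patient cohorts"
# High overlap suggests the passage should be reviewed before submission.
print(f"{ngram_overlap(draft, known):.0%} of 5-grams reused")
```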

Confidentiality and AI in Peer Review:

Feeding proprietary research into AI for peer review introduces confidentiality risks. AI’s inability to identify the novelty or significance of breakthrough research could lead to flawed assessments, undermining scientific rigour.

Systematic Concerns and Flywheel Effect:

Dr. Jamal warned that poor-quality data fed into AI could lead to a “flywheel effect,” where bad information continuously generates inaccurate conclusions. This underscores the need for high-quality inputs to preserve the reliability of AI-enhanced research.
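As a hedged illustration of why input quality matters, the toy simulation below (our sketch, not a model from the session) shows how a modest initial skew compounds when a model’s outputs are recycled as training data. The assumption that 10% of cases are ambiguous and resolved toward the current majority is arbitrary, but any such feedback drives the label distribution toward uniformity over repeated rounds.

```python
# Minimal sketch of the "flywheel effect": when a model's outputs are
# recycled as training data, an initial skew in the labels compounds
# round after round until one label dominates entirely.

def retrain_on_own_outputs(bias: float, rounds: int) -> list[float]:
    """Track the positive-label share across self-training rounds.

    `bias` is the initial fraction of positive labels. Each round,
    assume 10% of examples are ambiguous and the model resolves all
    of them toward whichever label currently dominates; the relabelled
    data then becomes the next round's training set.
    """
    history = [round(bias, 3)]
    for _ in range(rounds):
        ambiguous = 0.10
        toward_majority = ambiguous if bias >= 0.5 else 0.0
        bias = bias * (1 - ambiguous) + toward_majority
        history.append(round(bias, 3))
    return history


print(retrain_on_own_outputs(bias=0.55, rounds=10))
# [0.55, 0.595, 0.636, 0.672, ...] -> a 55/45 split drifts toward 100/0.
```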

Mitigation Strategies:

To responsibly integrate AI, researchers must adhere to publishing guidelines, disclose AI use, and prioritise high-quality data. Developing research-specific AI models, rather than relying on open-source ones, will help protect confidentiality and ensure better standards in research output.

Dr. Jamal concluded his session by emphasising that AI cannot be considered an author, and researchers are fully accountable for how they use it in their work. Transparent disclosure of AI’s involvement is essential, including the rationale for its use. Extra caution is required when using AI for peer or grant reviews to avoid ethical breaches and confidentiality risks. The development of research- or medical-specific language models, rather than relying on open-source tools, is crucial to maintaining integrity and data security in scientific research.

Société Internationale d'Urologie Congress, 23-26 October 2024, New Delhi, India.