Artificial intelligence (AI) has led to a surge in the development of diagnostic and prognostic models in healthcare. Some of these AI models have demonstrated remarkable performance, rivalling that of physicians, particularly in diagnostic imaging. However, concerns persist regarding the validity and transparency of these models. Rigorous validation is essential to ensure that AI-based prognosis and diagnosis can be used safely and accurately in clinical practice, and transparency is crucial to build trust in these algorithms and facilitate accountability. To address these concerns, we invite contributions to our Collection focused on the validation and transparency of AI-based diagnosis and prognosis.
Authors can submit manuscripts as Research articles, Methodology papers, Reviews, Protocols, and Commentaries. Topics of interest for this Collection include, but are not limited to:
- Methodological research investigating novel ways to rigorously validate AI prognosis and diagnosis, including methods pertaining to Large Language Models.
- Methodological research on state-of-the-art methods to enhance the transparency of AI-based diagnosis and prognosis.
- Applied research on AI-based diagnosis or prognosis with a focus on rigorous validation or explainable AI methods.
- Applied research assessing the impact of AI-based diagnosis and prognosis across diverse patient populations to address fairness concerns.
- Impact studies assessing the added value of AI-based diagnosis or prognosis for decision-making.