Abstract

The integration of artificial intelligence (AI) into healthcare is becoming increasingly important, especially given its potential to enhance patient care and operational workflows. This paper navigates the complexities and potential of AI in healthcare, emphasising the necessity of explainability, trustworthiness, usability, transparency and fairness in developing and implementing AI models. It underscores the ‘black box’ challenge, highlighting the gap between algorithmic outputs and human interpretability, and articulates the pivotal role of explainable AI in enhancing the transparency and accountability of AI applications in healthcare. The discourse extends to ethical considerations, exploring the potential biases and ethical dilemmas that may arise in AI applications, with a keen focus on ensuring equitable and ethical AI use across diverse global regions. Furthermore, the paper explores the concept of responsible AI in healthcare, advocating for a balanced approach that leverages AI’s capabilities for enhanced healthcare delivery while ensuring the ethical, transparent and accountable use of technology, particularly in clinical decision-making and patient care.

Original language: English
Article number: e100920
Journal: BMJ Health and Care Informatics
Volume: 30
Issue number: 1
DOIs
Publication status: Published - Dec 21 2023

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics
  • Health Information Management
