This blog post raises important questions that must be answered to clarify the position of artificial intelligence in healthcare.
For an artificial intelligence (AI) solution to be successful, a large amount of patient data is needed to optimize the performance of its algorithms. Accessing such datasets in healthcare comes with several challenges:
Patient Privacy and Data Ownership Ethics
Access to personal medical records must be strictly protected. In recent years, data sharing between hospitals and AI companies has raised several ethical concerns, leading to questions such as:
- Who owns and controls the patient data required to develop a new AI solution?
- Should hospitals be allowed to continue providing (or selling) large volumes of anonymized patient data to third-party AI companies?
- How can patients' privacy rights be protected?
- What would the consequences be in the event of a security breach?
- What impact will new regulations have, given that they grant individuals the right to delete their personal data in certain circumstances and impose penalties of millions of dollars for non-compliance?
Data Quality and Usability
In healthcare, data can be subjective and prone to error, so collected data may not be as accurate as in other fields. For example, in databases where information such as patient identity, medications, disease names, age, height, weight, and gender is entered, health personnel may fail to record correct values even for seemingly simple details like height and weight, and such mistakes can have significant implications for research. An AI algorithm investigating the correlation between height, weight, and a disease would produce unreliable results if the underlying dataset is incorrect. In this context, it is crucial that doctors and healthcare personnel enter patient records accurately into systems like e-Nabız, as the future usability of the data depends on its precision.
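To make the data-quality point concrete, below is a minimal sketch of a plausibility check that could flag suspicious height and weight entries before a dataset is used to train or evaluate a model. The column names ("height_cm", "weight_kg"), the threshold ranges, and the pandas-based approach are all illustrative assumptions, not details of any particular hospital system.

```python
import pandas as pd

# Hypothetical plausibility ranges for adult patient records; the exact
# thresholds and column names are assumptions chosen for illustration.
PLAUSIBLE_RANGES = {
    "height_cm": (120, 220),
    "weight_kg": (30, 250),
}

def flag_implausible(records: pd.DataFrame) -> pd.DataFrame:
    """Return rows whose height or weight is missing, non-numeric,
    or outside the plausible range.

    Such rows would need manual review or exclusion before being fed
    to an algorithm studying correlations between height, weight,
    and a disease.
    """
    mask = pd.Series(False, index=records.index)
    for column, (low, high) in PLAUSIBLE_RANGES.items():
        values = pd.to_numeric(records[column], errors="coerce")
        # Flag missing, non-numeric, or out-of-range entries.
        mask |= values.isna() | (values < low) | (values > high)
    return records[mask]

if __name__ == "__main__":
    sample = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "height_cm": [172, 17.2, 168],   # 17.2 cm looks like a data-entry error
        "weight_kg": [70, 65, 650],      # 650 kg is implausible
    })
    print(flag_implausible(sample))
```

Checks like this are only a first line of defence; consistent data-entry workflows and systematic audits matter more than any single script.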
In addition to these concerns, the following questions must also be clearly addressed for AI solutions operating in healthcare systems:
- Developing regulations for the cloud-based and continuously evolving technologies these algorithms run on presents obvious challenges. How can patients be protected?
- How can adequate regulatory oversight be ensured for a continuously learning and evolving solution, as opposed to a version-controlled medical device?
- For AI solutions like chat-based primary care tools that involve direct patient interactions without clinician oversight, the question arises: is the technology merely a device or does it function as a 'medical practitioner'? Will such applications require a medical license, and would a national health board accept the issuance of such a license?
- When it comes to healthcare, responsibility should be clear if something goes wrong. If diagnosis or treatment is controlled by these technologies, would the AI company take responsibility for the patient's health? Similarly, can insurance companies trust the diagnosis and treatment provided by an AI tool?
User Adoption and Acceptance
User adoption is another barrier to implementation. The human touch of an interaction with a doctor may be lost with such tools.
- Will patients be willing to trust a diagnosis from a software algorithm instead of a human? Will clinicians be open to adopting these new solutions?
The above questions are crucial in determining the role of AI in healthcare and must be addressed.
With factors such as the aging population and increasing rates of chronic diseases, the need for innovative solutions in healthcare is clear. While AI’s transformative power is felt across many industries, its potential impact in healthcare will directly affect lives. Today, AI-powered solutions have taken steps to address significant issues, but their global impact on the healthcare sector will become clearer in the future. If several key challenges can be solved in the coming years, AI could play a leading role in increasing clinical resources and ensuring optimal patient outcomes.
Hakan Kahraman - Software Analyst