AI can be a handy tool for overcoming several healthcare challenges, such as diagnosing complex diseases, speeding up drug discovery, and delivering precision care. While AI has enormous potential to draw new insights from medical data and apply them to improve healthcare, data management remains a fundamental challenge. The healthcare industry cannot move toward wider AI deployment until it figures out how it will manage that data. Below we discuss the most pervasive data challenges of AI deployment in healthcare.
AI has significant potential to improve patient outcomes, surface valuable insights, predict epidemic outbreaks, help avoid preventable diseases, improve quality of life, and reduce the cost of healthcare delivery. However, preserving data security and patient privacy is a complicated task. AI consumes vast volumes of patient data and learns automatically from the features and patterns in it. In simple words, AI can be trained to do remarkable things with medical data. For example, K Health leverages data from EHRs to build patient profiles and personalize chatbot responses.
But when data privacy is involved, using it to train AI becomes a complex task.
The datasets used to train AI come from a variety of sources, and in most cases patients are not even aware of how their data is being used. The vague language of HIPAA allows healthcare providers and businesses to use patient data for healthcare functions and to share it with relevant companies without first asking patients.
A real-world example is Google’s partnership with Mayo Clinic, under which Mayo Clinic grants Google limited access to de-identified patient data, which Google uses to train algorithms.
Google has also faced flak from regulators for its health data practices in the past.
Startups are also using data from different sources to train their AI systems, but when it comes to disclosing those sources, they are hesitant to do so for competitive reasons.
Privacy is a highly sensitive issue, so experts need to find ways to preserve it even as they develop more advanced AI solutions to redefine healthcare delivery.
To ensure the successful deployment of artificial intelligence in healthcare, experts will have to develop industry standards. Standards would help eliminate errors and biases in healthcare AI tools and smooth the technology’s implementation. Several challenges currently hinder AI adoption, including difficulties with integration and scaling within healthcare organizations.
Defining the standards of data sharing and collection can help address the issues. Committees should be created to develop and periodically update standards and best practices.
Data errors and biases
Errors and biases are emergent symptoms of the lack of standardization.
A report by the Pennsylvania Patient Safety Authority in Harrisburg found that between 2016 and 2017, EHR systems caused 775 problems during laboratory testing. Another report found that clinicians are overwhelmed with alerts and, as a result, some alerts are missed. Such mistakes and missed alerts introduce bias into health data, so the data used to train AI algorithms for diagnosing diseases can perpetuate inequalities.
In eye disease diagnosis, for instance, almost all training datasets come from patients in Europe, China, and North America, so the resulting algorithms may not work well for patients in underrepresented countries.
Similarly, skin cancer-detection algorithms may perform poorly on Black patients because the training data comes predominantly from light-skinned patients.
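One practical way to catch this kind of skew before training is to audit the demographic makeup of a dataset. The sketch below is a minimal, hypothetical example: the record schema and the `skin_tone` field are assumptions for illustration, not part of any real clinical dataset.

```python
from collections import Counter

def audit_skew(records, field, threshold=0.8):
    """Flag a dataset when one value of a demographic field dominates.

    records:   list of dicts (hypothetical schema for illustration)
    field:     the demographic attribute to audit, e.g. "skin_tone"
    threshold: maximum acceptable share for any single group
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    dominant_group, share = max(shares.items(), key=lambda kv: kv[1])
    return {
        "shares": shares,
        "dominant_group": dominant_group,
        "skewed": share > threshold,
    }

# A toy dataset that over-represents light skin tones (9 of 10 records)
records = [{"skin_tone": "light"}] * 9 + [{"skin_tone": "dark"}] * 1
report = audit_skew(records, "skin_tone")
print(report["skewed"], report["shares"])  # the "light" group holds a 0.9 share
```

A check like this is only a first step; a balanced dataset still needs clinically representative labels, but an audit makes the imbalance visible before a model is trained on it.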
Even if the errors and biases are eliminated, there is a growing risk of cyber intrusion.
For example, cybercriminals caused a widespread outage across a nationwide network of hospitals, forcing patients to be diverted to other hospitals, delaying lab results, and pushing staff back to pen and paper.
A survey found that more than 37% of healthcare organizations experienced a phishing incident, and more than 32% a ransomware attack, while COVID-19 was at its peak.
These instances highlight the urgent need to have strict data security measures in place.
Tackling these issues will require a combination of approaches, techniques, and novel paradigms. Securing data, for example, calls for robust encryption together with policy and identity management. Standardization requires tools for consolidating disparate records.
To address privacy issues, transparency is critical, and de-identification of data is a powerful privacy-preserving technique.
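At its simplest, de-identification means stripping direct identifiers and replacing stable IDs with pseudonyms. The sketch below is a minimal illustration, not a HIPAA-compliant implementation: the field names are hypothetical, and a real pipeline would also handle dates, locations, and free-text notes.

```python
import hashlib
import hmac

# Direct identifiers to strip outright (hypothetical field names)
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record, secret_key):
    """Return a copy of a patient record with direct identifiers removed
    and the patient ID replaced by a keyed pseudonym.

    A keyed HMAC (rather than a plain hash) means the pseudonym cannot be
    reversed or re-derived by anyone who does not hold the secret key,
    while the same patient still maps to the same pseudonym across records.
    """
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pseudonym = hmac.new(
        secret_key, record["patient_id"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    clean["patient_id"] = pseudonym
    return clean

record = {
    "patient_id": "MRN-12345",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "diagnosis": "type 2 diabetes",
}
print(deidentify(record, secret_key=b"keep-this-key-secret"))
```

Note that de-identification is not a guarantee: de-identified records can sometimes be re-identified by linking them with other datasets, which is why it complements, rather than replaces, access controls and transparency.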
Biases and errors are harder problems to solve, but clear disclosures of how datasets were collected can improve assessments of their fitness for clinical use.
AI can open the healthcare sector to a world of new possibilities, but the data challenges cited above remain roadblocks. We may not have all the answers yet, but once these challenges are solved, AI can help tackle some of healthcare’s biggest problems.