AI in Medical Devices: Addressing Security and Bias Concerns


One can’t speak about the immense potential benefits of AI and IoT in medical devices without pausing to examine the threats and vulnerabilities posed by their increasing ubiquity. Two concerns rise to the forefront: one sees human bad actors exploiting the machines, while the other finds the machines themselves replicating, or even amplifying, the worst of human failings.

The Internet of Medical Things

According to one estimate, 50 billion medical devices will be connected to clinical systems within the next decade.1 As the IoMT (Internet of Medical Things) grows to include more applications and devices, its exposure to risk will naturally increase. Bad actors are every bit as industrious as the life scientists and R&D researchers they target, making cybersecurity a key battleground.

Security Concerns of Medical Devices

In 2019, a study from Ben-Gurion University showed how hackers could potentially manipulate the CT and MRI results of lung cancer patients, altering key data about their tumours.

According to MedCity News, “Both radiologists and AI algorithms were unable to differentiate between the altered and correct scans. This kind of tampering has the potential to impact patient lives, and can also result in insurance fraud, ransomware attacks and other issues for both patients and providers.”1

Another study, conducted by McAfee, found that infusion pumps were also vulnerable to manipulation. Researchers warn that hackers could “deliver double doses of medications to patients without detection.”2

It’s not difficult to see the potentially disastrous results of a breach, and the vulnerabilities are real.

“Bad actors often need little more than an emulator – which enables one computer system to behave like another – and a piece of code from the system being targeted in order to successfully program AI to hack a device,” reports MedCity News.1

Protecting Medical Devices from Hacking

Protections against this include access control layers, anomaly detection, and devices built to resist reverse engineering. It’s far easier to implement these safeguards during product development than to retrofit them into existing devices, making foresight and vigilance key components of the battle.
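To give a flavour of what anomaly detection might look like in practice, here is a minimal, hypothetical sketch: a statistical check that flags readings deviating sharply from a device’s recent baseline, such as the doubled infusion doses described above. The window size, threshold, and dose values are illustrative assumptions, not taken from any real device or product.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady dose rate with one suspicious spike (e.g. a doubled dose)
doses = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 10.0, 5.0]
print(detect_anomalies(doses))  # → [10], the index of the spike
```

Real deployments would layer richer signals (timing, command provenance, cryptographic attestation) on top of simple statistics, but even a check this basic illustrates why designing it in from the start is easier than bolting it on later.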

With this in mind, the University of Minnesota recently announced a new Center for Medical Device Cybersecurity. The initiative sees the University collaborating with industry partners to form a hub for “workforce training, outreach, and discovery to bolster the emerging field.”2

“While manufacturers can ensure a high level of safety through testing, the security of connected devices remains a growing and moving target, making this collaboration and the work of the CMDC critical to the industry and all those it serves,” says Technology Leadership Institute director Allison Hubel.2

Potential Bias of AI in Healthcare

The healthcare system is man-made and, as such, is subject to bias, both conscious and subconscious. As more and more decisions are made on the basis of data, AI, and machine learning, the hope is that these biases will be eliminated. Without careful planning and oversight, however, we may find machines simply repeating our human mistakes.

Consider the real-world example of an algorithm designed to produce scores that prioritize patients with the greatest healthcare needs.3 It was found that, amongst those with equal scores, Black patients were in fact sicker than White patients. The algorithm had used “health costs” as a proxy for “health needs.” Since White patients have traditionally received the highest level of care, their health costs, and thus their scores, were inflated. Rather than eliminate human bias, the algorithm had merely echoed it.
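The mechanism behind this failure can be illustrated with a toy simulation (the numbers and the 0.6 “access” factor are invented for illustration and bear no relation to the actual study): give two groups identical distributions of true health need, let one group historically receive less care, then rank patients by spending the way the flawed proxy did.

```python
import random

random.seed(0)

def simulate_patient(group):
    """Toy model: identical need in both groups, but Group B
    historically receives less care, so its costs understate need."""
    need = random.uniform(0, 100)            # true health need
    access = 1.0 if group == "A" else 0.6    # Group B accesses less care
    cost = need * access                     # spending used as the proxy
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(500)]

# Rank by the cost proxy and take the "highest-need" decile,
# as the flawed algorithm effectively did.
top = sorted(patients, key=lambda p: p["cost"], reverse=True)[:100]
share_b = sum(p["group"] == "B" for p in top) / len(top)
print(f"Group B share of top decile: {share_b:.0%}")
```

Because both groups were constructed with identical need, a fair ranking would put Group B at roughly 50% of the top decile; ranking on cost instead pushes its share far below that. The bias lives entirely in the choice of proxy label, not in the ranking code itself.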

More troubling still is the as-yet-unexplained ability of an AI system to correctly guess a patient’s race based on X-rays and CT scans.4 Both doctors and computer scientists are at a loss to explain how the algorithm does so, leaving them unable to address the issue.

“That means that we would not be able to mitigate the bias,” Dr. Judy Gichoya, a co-author of the study and radiologist at Emory University, told Motherboard. “Our main message is that it’s not just the ability to identify self-reported race, it’s the ability of AI to identify self-reported race from very, very trivial features. If these models are starting to learn these properties, then whatever we do in terms of systemic racism … will naturally populate to the algorithm.” 

Eliminating Bias in Healthcare

Research and collaboration will be key to preventing bias as AI continues to play a greater role in decision making. 

“AI is only as good as the data you train it on,” says Cognoa chief medical officer Dr. Sharief Taraman.5 Taraman believes that development should involve collaboration between data scientists, clinicians, and other stakeholder groups such as patient advocacy groups. 

“If we’re very intentional about making sure we include all of those folks, we do it in a way that actually removes the biases and gets rid of them,” he says.5

AI is being adopted by nearly every industry, and medicine is no exception. While headlines often centre on its ever-increasing capabilities, those who work to mitigate risk and bias should not be overlooked; they play a key role in maximizing the benefits the technology can deliver.

Cited Sources
1 Huin, Steeve. “The Future of Healthcare Is Dependent on Securing AI-Powered Medical Devices.” MedCity News, September 8, 2021. https://medcitynews.com/2021/09/the-future-of-healthcare-is-dependent-on-securing-ai-powered-medical-devices/
2 McKeon, Jill. “University of Minnesota Unveils Center for Medical Device Cybersecurity.” HealthITSecurity, September 13, 2021. https://healthitsecurity.com/news/university-of-minnesota-unveils-center-for-medical-device-cybersecurity
3 Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science, October 25, 2019. https://www.science.org/doi/abs/10.1126/science.aax2342
4 “AI Can Guess Your Race Based on X-Rays, and Researchers Don’t Know How.” VICE. Accessed September 15, 2021. https://www.vice.com/en/article/wx5ypb/ai-can-guess-your-race-based-on-x-rays-and-researchers-dont-know-how
5 Kent, Chloe. “Sharief Taraman Q&A: Using AI to Fight Disparities in Medicine.” Medical Device Network, August 18, 2021. https://www.medicaldevice-network.com/features/sharief-taraman-qa-using-ai-to-fight-disparities-in-medicine/

Jessica Miles

Jessica Miles is one of our very successful Senior Recruiters here at Goldbeck. She specializes in recruiting professionals for sales, operations, and senior management in Production & Operations, Mining, Oil & Gas, Forestry & Agriculture, Industrial Sales, and Life Sciences & Biotech. Jessica has successfully filled very difficult searches for demanding clients across Canada, the USA, Europe, and Asia. She takes great pride in the time and effort she invests in understanding her clients’ requirements, then methodically and thoroughly scours all passive and active candidates in the relevant labour markets. Like all of our recruiters at Goldbeck Recruiting, Jessica uses the most up-to-date digital candidate sourcing tools and methodology, combined with candidate evaluation techniques using precise matrix systems and thorough face-to-face interviews.

Senior Recruiter at Goldbeck Recruiting Inc.