Neural Networks Can Manipulate Mammograms and Fool Radiologists

A cyber attacker could potentially insert a feature that looks like cancer into a scan, or remove it, researchers warn.
[Image: mammogram. Image credit: Chompoo Suriyo/Shutterstock]

Claire Cleveland, Contributor

(Inside Science) -- Researchers have developed a method for augmenting mammograms that could one day help radiologists evaluate medical scans and identify early warning signs of cancers that may not be easily spotted by a human, but the scientists also warn of potential misuses of the software.

The research team from the University Hospital of Zurich in Switzerland used a deep learning neural network, called CycleGAN, to manipulate mammograms and either insert or remove lesions from the images. They presented an early version of their paper summarizing the results last month at the Radiological Society of North America conference in Chicago.
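
The idea at the heart of CycleGAN can be sketched briefly. The hypothetical PyTorch snippet below is not the study's model; the layer sizes and the names gen_insert and gen_remove are illustrative only. It shows the two paired generators that translate images between a "with lesion" domain and a "healthy" domain, tied together by a cycle-consistency loss that forces a round trip to reproduce the original image.

```python
# Minimal, illustrative PyTorch sketch of the CycleGAN idea: two generators
# translate images between domains ("healthy" and "with lesion"), and a
# cycle-consistency loss forces a round trip to reproduce the original image.
# Layer sizes and names are illustrative, not the study's actual model.
import torch
import torch.nn as nn

def tiny_generator():
    # A toy image-to-image network; a real CycleGAN generator is much deeper.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

gen_insert = tiny_generator()   # healthy -> lesion
gen_remove = tiny_generator()   # lesion  -> healthy

healthy = torch.rand(4, 1, 64, 64)   # stand-in batch of "healthy" patches
lesion  = torch.rand(4, 1, 64, 64)   # stand-in batch of "lesion" patches

l1 = nn.L1Loss()

# Cycle consistency: healthy -> fake lesion -> reconstructed healthy, and back.
fake_lesion   = gen_insert(healthy)
cycle_healthy = gen_remove(fake_lesion)
fake_healthy  = gen_remove(lesion)
cycle_lesion  = gen_insert(fake_healthy)

cycle_loss = l1(cycle_healthy, healthy) + l1(cycle_lesion, lesion)

# A full CycleGAN combines this term with adversarial losses from two
# discriminators that judge whether translated images look like real examples
# of the target domain; those are omitted here for brevity.
optimizer = torch.optim.Adam(
    list(gen_insert.parameters()) + list(gen_remove.parameters()), lr=2e-4)
optimizer.zero_grad()
cycle_loss.backward()
optimizer.step()
```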

Deep learning, a form of machine learning, involves training an algorithm to make decisions without explicitly programming how to do so. The process is loosely modeled on the human brain, but instead of forging connections between biological neurons, the machine uses vast quantities of data to tune a complex mathematical model called a neural network.
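
As a rough illustration of that idea, the toy Python sketch below trains a small neural network to separate two clusters of synthetic points. Nothing here comes from the mammography study; the point is only that no decision rule is programmed in, the model infers one from examples.

```python
# Toy illustration of "learning from data without explicit rules": a small
# neural network is shown labelled examples and tunes its parameters itself.
# The data are synthetic 2-D points, purely a stand-in for real medical images.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Two clusters of points standing in for two classes of examples.
class_a = rng.normal(loc=-1.0, scale=0.5, size=(200, 2))
class_b = rng.normal(loc=+1.0, scale=0.5, size=(200, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

# No decision rule is written by hand; the network infers one from the examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print("training accuracy:", model.score(X, y))
```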

Cyber criminals have already targeted the medical system in other ways, noted Anton Becker, a radiology resident at the University Hospital of Zurich. For example, last year hospitals around the world, including in the U.S., were hit by ransomware attacks, in which hackers encrypted hospital data systems and demanded money in exchange for restoring access to the files.

“They shocked me when the first ones occurred,” Becker said. “I didn’t think anybody would have that kind of criminal energy to play with people’s lives for a couple of thousand bucks.”


The team trained CycleGAN on 680 mammographic images from 334 patients. Becker chose mammography because each patient typically has only two or four images, far fewer than a CT scan, which can produce hundreds or thousands. The team also resized the images to a lower resolution. The relatively small number of images and the lower resolution meant the system could be trained quickly, Becker said, which mattered because this first stage of the research was meant to be exploratory and to help set a direction for future work.
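
The team's preprocessing code is not part of the article, but the downscaling step Becker describes would look something like the hedged sketch below. The folder names and the 256-by-256 target resolution are assumptions made for illustration, not values reported by the researchers.

```python
# Hypothetical preprocessing sketch: downscale mammograms so a generative
# model can be trained quickly on modest hardware. The paths and the 256x256
# target size are illustrative assumptions, not values from the paper.
from pathlib import Path
from PIL import Image

SOURCE_DIR = Path("mammograms/full_resolution")   # assumed input folder
TARGET_DIR = Path("mammograms/downscaled")        # assumed output folder
TARGET_SIZE = (256, 256)                          # assumed training resolution

TARGET_DIR.mkdir(parents=True, exist_ok=True)

for path in sorted(SOURCE_DIR.glob("*.png")):
    with Image.open(path) as img:
        # Convert to grayscale and shrink to the training resolution.
        small = img.convert("L").resize(TARGET_SIZE, Image.LANCZOS)
        small.save(TARGET_DIR / path.name)
```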

To test whether CycleGAN could produce altered images indistinguishable from genuine mammograms, the team showed the lower-resolution images, both original and manipulated, to three radiologists. None of them could reliably tell the two apart.
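
"Reliably distinguish" is typically judged by comparing each reader's hit rate with chance. The sketch below shows one common way to do that with a binomial test; the counts are invented for the example and are not the study's results.

```python
# Illustrative check of whether a reader does better than guessing at telling
# altered from unaltered images. The counts below are invented for the example
# and are NOT the study's actual results.
from scipy.stats import binomtest

n_images = 60    # hypothetical number of images shown to one radiologist
n_correct = 33   # hypothetical number correctly called altered or unaltered

result = binomtest(n_correct, n_images, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_images:.2f}, "
      f"p-value vs. chance = {result.pvalue:.3f}")
```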

“I think the important takeaway from this paper is not so much that you’re able to make images that are malignant or not and you’re able to fool people, but I think the really interesting part about this paper is the small number of images that you actually need to do this,” said Jarrel Seah, a researcher at Alfred Health, a hospital in Melbourne, Australia, who was not involved in the study.

When the team increased the image size in a second experiment, CycleGAN had a harder time avoiding distortions of the natural patterns in fatty and dense breast tissue. That limitation makes a malicious manipulation unlikely today, according to the researchers, but pushing neural networks like CycleGAN to their limits helps researchers better understand the algorithms and the threats that could come with introducing deep learning into medical environments.

The use of AI in medical imaging presents many interesting research areas much broader than the issue of security, Seah pointed out. “In some ways we still don’t understand how these algorithms work, and that’s not just a philosophy book problem,” he said. “If we’re able to understand how these algorithms work then we might understand a bit more about our own data and our own imaging and perhaps understand a bit more about the disease.”
 

Author Bio

Claire Cleveland is a science writing intern for Inside Science News Service with degrees in journalism and biology & society from Arizona State University.