How AI could make therapeutic decision-making for breast cancer more accurate, affordable

Imagine being a doctor and having a precocious resident permanently by your side, giving you brilliant insight into disease and helping you to identify the best treatment path for your patients.

A team at Salesforce Research believes this scenario is closer to reality than you might think, as a result of a series of exciting developments in AI vision technology and machine learning.

Breast cancer rates on the rise

Breast cancer affects more than two million women worldwide each year, with around one in eight women in the United States developing breast cancer over the course of their lifetime. There were also 2,550 new cases of breast cancer in men in the U.S. in 2018. Alarmingly, rates of breast cancer are increasing in nearly every region globally.

Salesforce Research collaborated with the Ellison Institute to develop ReceptorNet, a deep-learning algorithm that can determine hormone-receptor status—a crucial biomarker for clinicians when deciding on the appropriate treatment path for breast cancer patients—with high sensitivity and specificity.

While using AI to try to improve outcomes for breast cancer patients is not new, efforts up until now—such as Google’s AI breast cancer screening tool—have largely focused on diagnosing cancer.

What makes ReceptorNet unique is that it focuses on improving the way treatment decisions are made for breast cancer patients. Specifically, ReceptorNet predicts hormone-receptor status from an inexpensive and ubiquitous tissue image. That’s in contrast to the current standard of care, which requires both a more expensive, less widely available type of tissue image—and a trained pathologist to review those images.

Crucially, because it is a quicker and less expensive way of determining hormone-receptor status than the system commonly used today in countries like the U.S., it could help make high-quality decision-making for breast cancer therapies more accessible, allowing patients around the world to receive the best possible treatment path regardless of the expertise available in their healthcare system.

How the ReceptorNet project began

The development of ReceptorNet originated in conversations between Salesforce researchers and Dr. David Agus, Founding Director and CEO of the Lawrence J. Ellison Institute for Transformative Medicine of USC.

Dr. Agus is a renowned oncologist and a professor of medicine and engineering. He explains that there has long been a belief among cancer doctors that tumor cells contain crucial information about cancer that the human brain can’t quite extract.

“The human brain is very good at determining whether or not there is cancer based on looking at patterns in cells,” says Dr. Agus, “but it can’t determine the subtle differences in these patterns that correlate with the outcome of the cancer. In other words, what the molecular on/off switch is.”

This means that a patient might be diagnosed with cancer but then have to wait weeks for the results of molecular studies to determine what treatment they should receive.

“Our team has been working on using AI to understand patterns of cells and help make treatment decisions for several years. We had this notion that maybe we could find the answer to those molecular questions instantaneously with AI and machine learning,” Dr. Agus says.

That’s where the Salesforce Research team came in. “We have a world-class AI research team,” says Nikhil Naik, Lead Research Scientist at Salesforce Research and the first author on this study, adding that, “the collaboration also matched our philosophy of developing technology that doesn’t just serve the purpose of the company, but also has a positive impact on people and on the world.”

With a Ph.D. in computer vision from MIT, Naik says he realized early on that Salesforce would be in an ideal position to help.

Finding clues to cancer that the eye can’t see

The team developed an AI solution that is able to extract vital clues about breast cancer by learning to spot patterns in images of tumors, using a cheap and widely available imaging process. Naik gives a simple analogy.

Say you have an accident and you think you may have broken your arm. What if, instead of having to go to the hospital for an X-ray, you could just take a photo of your arm on your cell phone and an AI algorithm could determine whether or not you had a fracture?

“That is very similar to what we are doing. We are replacing an expensive, time-consuming process that requires specialized technology with a simpler, more widely available technology for imaging, using artificial intelligence.”

So how does this work in practice? Typically, when a patient is diagnosed with breast cancer, a pathologist will analyze their tumor tissue under a microscope using a process called immunohistochemistry (IHC) staining, to look for the presence of hormone receptors that allow the cancer to grow. This helps them to decide on the best course of treatment, such as hormone therapy or chemotherapy.

The problem with IHC staining is that it is expensive, time-consuming, and not readily available in many parts of the world, particularly in developing countries.

ReceptorNet has learned to determine hormone receptor status by using a much less expensive and simpler imaging process—hematoxylin and eosin (H&E) staining—that analyzes the shape, size, and structure of cells.

ReceptorNet has been trained on several thousand H&E image slides, each containing billions of pixels, from cancer patients in dozens of hospitals across the world.

“The algorithm is able to look at individual pixels and determine subtle patterns that the human eye can’t possibly perceive,” says Andre Esteva, Head of Medical AI at Salesforce Research and co-author on the study, explaining that these patterns can yield vital clues about how to treat the cancer.
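As an illustration only, the sketch below shows the general patch-based approach commonly used for gigapixel pathology slides: the slide is cut into many small tissue patches, each patch is turned into a feature vector, and a learned attention mechanism weighs the patches before making a single slide-level prediction. The names, dimensions, and the feature-extraction step are assumptions made for the example; this is not the published ReceptorNet architecture.

```python
# Minimal sketch: attention-weighted aggregation of patch features from a
# whole-slide H&E image. Hypothetical shapes and names; not ReceptorNet itself.
import torch
import torch.nn as nn

class AttentionSlideClassifier(nn.Module):
    """Scores a bag of patch embeddings and pools them into one slide-level
    prediction (e.g. hormone-receptor positive vs. negative)."""
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        self.attention = nn.Sequential(           # learns how much each patch matters
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)   # slide-level logit

    def forward(self, patch_feats):                # (num_patches, feat_dim)
        weights = torch.softmax(self.attention(patch_feats), dim=0)  # (num_patches, 1)
        slide_feat = (weights * patch_feats).sum(dim=0)              # weighted average
        return self.classifier(slide_feat), weights

# Usage: features for ~1,000 tissue patches cut from one slide, assumed to come
# from a pretrained image backbone (not shown here).
patch_features = torch.randn(1000, 512)
model = AttentionSlideClassifier()
logit, attention_weights = model(patch_features)
prob_receptor_positive = torch.sigmoid(logit)
print(prob_receptor_positive.item())
```

One appeal of this kind of design is that the attention weights give a rough indication of which regions of the slide drove the prediction, which is one way researchers can sanity-check what a model of this type has learned.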

Exciting times for medical AI

Since early 2019, Naik and Esteva have been leading a team at Salesforce that focuses on delivering AI applications for social good, primarily in the areas of medicine and science. Recently, the team created search engines for COVID-19 to help researchers and clinicians find information faster.

“There’s a significant benefit to doing this kind of AI research in industry, as opposed to academia. AI teams tend to flourish when provided with industrial scale compute capabilities—and industrial scale budgets—because those elements make it much easier to rapidly experiment,” says Esteva.

“I think the most impactful applications of AI will be in healthcare,” adds Research Scientist Ali Madani, who assisted on the computer vision algorithm that powers ReceptorNet.

Madani talks passionately about the transformative impact AI could have on people’s lives. “There are direct applications that could improve society as a whole,” he says. “That’s the underlying motivation which has drawn me to AI and healthcare.”

Eliminating bias, improving accessibility, transforming medicine

So, what could this mean for clinicians and patients? The ability to determine hormone-receptor status from H&E stains could make treatment less expensive and more readily available, particularly in developing countries.

It could also, says Dr. Agus, mean patients are spared an agonizing wait between diagnosis and the initiation of treatment.

Proposing a future use case for this new technology, Dr. Agus says, “Imagine when a woman comes in for her diagnosis and we can tell her right there on the spot what her treatment should be. Or in a third-world country (where molecular tests aren’t available), imagine potentially being able to, just by scanning a slide, tell a woman that she can get a pill that could put her breast cancer under control. All of a sudden there’s a transformation in medicine.”

In order for AI in medicine to deliver its full potential, clinicians must first have confidence in its accuracy.

“An algorithm that’s only 80% accurate is not good enough for a critical application, like determining which cancer therapy to give to a patient,” acknowledges Naik.

When the algorithm was tested on images it had never seen before, it achieved 92% accuracy for hormone-receptor determinations, which indicates its potential for future clinical deployment.

Numerous small, incremental changes were made to ensure the algorithm was able to deliver accurate predictions, regardless of differences in the preparation of the tissue samples it was analyzing. Crucially, the algorithm has also been able to deliver reliable performance across different demographic groups.
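The article does not specify which incremental changes the team made to handle differences in tissue preparation, but one common technique for this kind of robustness is to randomly perturb the color of each training patch so the model does not latch onto lab-specific staining quirks. A minimal, hypothetical sketch:

```python
# Illustrative only: color jitter is one common way to make a pathology model
# less sensitive to lab-to-lab differences in H&E staining. This is not a
# description of the specific steps the ReceptorNet team took.
from torchvision import transforms

stain_robust_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    # Jitter brightness, contrast, saturation, and hue to mimic variation
    # in stain intensity and scanner color response.
    transforms.ColorJitter(brightness=0.15, contrast=0.15,
                           saturation=0.15, hue=0.05),
    transforms.ToTensor(),
])

# Applied to each tissue patch (a PIL image) before it is fed to the network:
# patch_tensor = stain_robust_augmentation(patch_image)
```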

There have long been concerns that healthcare and evidence-based medicine can be biased against certain groups, because they are often underrepresented in the evidence base. However, during the development of ReceptorNet, researchers were able to achieve accurate results across a variety of different groups, which could be vital in order to build confidence in the performance of the AI among healthcare professionals.

Naik says, “We analyzed this by splitting the data based on things like age, race, and location, and statistically there was no difference in the performance of the algorithm.”
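A minimal sketch of that kind of subgroup check, using hypothetical data and column names, would compute the same performance metric separately for each demographic slice and compare the results:

```python
# Sketch of a subgroup analysis like the one Naik describes: score the model
# separately on each slice of the test set. Data and column names are invented.
import pandas as pd
from sklearn.metrics import roc_auc_score

results = pd.DataFrame({
    "true_receptor_status": [1, 0, 1, 1, 0, 1, 0, 0],
    "predicted_score":      [0.91, 0.20, 0.75, 0.88, 0.35, 0.66, 0.15, 0.42],
    "age_group":            ["<50", "<50", "50+", "50+", "<50", "50+", "<50", "50+"],
})

for group, subset in results.groupby("age_group"):
    auc = roc_auc_score(subset["true_receptor_status"], subset["predicted_score"])
    print(f"age {group}: AUC = {auc:.2f}")
```

In practice the same split would be repeated for race, hospital site, and other attributes, with a statistical test to confirm that any differences between slices are not significant.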

Importance of close collaboration

Throughout the design process, the Salesforce team worked closely with Dr. Agus, Dr. Dan Ruderman, and Dr. Michael F. Press at the Ellison Institute to ensure they were aware of any possible biases that could exist in the data that was fed into the model. This close collaboration also helped to ensure that the team’s objectives were closely aligned with the clinical workflows and the questions that clinicians, doctors, and nurses would be interested in.

Despite this, the team was aware that not every medical professional would be easily convinced that AI could be relied upon.

Naik says that the pathologists they spoke to were initially skeptical, explaining that this kind of prediction based on an H&E slide is not something pathologists could do on their own.

“However, when they saw that the algorithm was working so well, they were really impressed that it was able to make these predictions just by learning from thousands of images—and also that it was able to confirm their suspicions about what kinds of patterns could be predictive. That was very impressive and exciting for them.”

Dr. Agus agrees. “When we first started looking at how we could answer molecular questions instantaneously with AI and machine learning, we achieved some good results. But when we teamed up with Salesforce, those results went from good to great.”

Long-term and short-term implications

From a clinical perspective, this technology could eventually lead to a number of positive impacts. In a developed country such as the U.S., it could reduce the cost of care and the time it takes to initiate breast cancer treatment, because it uses much less expensive imaging technology and automated decision-making. It could also improve accuracy and deliver better outcomes for patients.

In developing countries where there is limited access to IHC staining, it could have a big impact in terms of broadening access to treatment.

The immediate effect of this work is to lay a foundation for future studies to compare the clinical workflow of a pathologist with and without this type of AI, in order to better understand its full potential.

Dr. Agus says, “This is just the tip of the iceberg of what we’re going to be able to do with AI in cancer care. It’s just a pilot project to show what’s doable. Now, we can get deeper and deeper, and I can envisage a day, not too far away, when just by looking at a slide I can, with the help of AI, tell somebody ‘you’re going to get Drug X and not Drug Y because of how the cells are arranged’.”

For Esteva, one of the most exciting things about this project is that it has shown how AI can do more than just mimic the role of a doctor.

“What we’re doing here is actually training AI to do something that the physician can’t do, as an added capability to their repertoire. The AI can see patterns that are essentially invisible to the physician, and potentially critical for the patient.”

AI could ultimately also have a positive impact on the doctor-patient relationship. With access to AI-powered insights, doctors could have more informed conversations with their patients at an early stage in their treatment path, giving them a fuller, data-driven picture of what may lie ahead in terms of treatment and therapy.

Moving into the future

Esteva is keen to stress that AI will help augment the role of a doctor, not replace it.

“What gets me really excited is thinking, ‘where is this going to go in the next five or 10 years?’ Unfortunately, many of us can relate to situations where someone we love was given the wrong therapy or misdiagnosed. You end up asking yourself how their lives could have been different if a slightly better decision was made. A single moment can have a ripple effect in a patient’s life for years or decades.”

“Physicians should be able to make the best possible decisions based on all available medical knowledge. If you can build AI that can help doctors make the right decisions, by harnessing the collective intelligence of physicians and medical data, that’s incredibly powerful.”

Dr. Agus adds, “AI and machine learning are going to herald a new era, with potential to be applied to diseases beyond cancer and ultimately create better outcomes for patients. It won’t happen overnight, and it will be a slow, step-by-step process, but we’re embarking on a journey over the next decade to improve every aspect of what we do, through data. That’s really exciting.”
