How AI is revolutionizing medical science

Walk into Patrick Pilarski’s lab and you immediately notice the robot arms and hands that lie casually on tabletops. You hear the soft whir of small motors as the fingers on these models curl, wave or extend. The motions appear fluid, and Pilarski’s team wants to make them as lifelike as possible.

This is the Bionic Limbs for Improved Natural Control (BLINC) Lab at the University of Alberta.

It’s a place where Pilarski wants to advance the technology in prosthetic limbs to the point where users “actually, really, truly own their prosthetic limbs, to feel like that limb is part of themselves.”

The tool he’s using to achieve that goal is artificial intelligence.

Can you “teach” an artificial hand to automatically open as you move your arm toward a table where a mug of tea is sitting? By using a branch of AI called reinforcement learning, Pilarski believes the answer is yes.

“There’s a lot of things we hope our bionic parts would do—and it’s really, really hard for a person to control the system to do it. By allowing the machine to learn what the person wants, when they want it and when they need it, it can fill in the gaps,” he said, adding that the machine intelligence can be the “glue that connects the person with the machine.”

The work happening in the BLINC Lab is only one part of a broader, number-crunching approach to health research happening at the university.

It’s easy to associate AI with the technologies that most obviously affect our lives—such as the personal assistant on our smartphone or the shopping suggestions that fill our screens whenever we’re on the internet—but the discipline has wider implications.

AI allows computers to crunch through tens of millions of data points to find patterns. A person could never discern these patterns by simply studying the images or numbers—they’re buried too deep in the data to be evident to the naked eye. But with the right computer programs, some of Pilarski’s colleagues have, for example, developed a diagnostic tool for schizophrenia.

And for Pilarski, artificial intelligence is helping to create “smart” bionic limbs that—as impossible as it sounds—have intuition that allows them to operate as seamless extensions of a person’s body, able to shift and adapt to constant movements in real time.

Making artificial limbs feel real

In his office, Pilarski usually has a cup of tea at his desk. If he’s working at his computer, at some point or another, he’ll likely extend his arm toward the cup, clasp it with his hand and bend his elbow to bring the cup to his lips.

In Pilarski’s research, a computer can be trained to recognize this repeated pattern of movement and ultimately predict when it will occur. Applying that technology to a robotic prosthesis will be a game-changing development for people with artificial limbs.
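As a rough illustration of that kind of prediction, the sketch below uses a simple temporal-difference update—the incremental forecasting at the heart of reinforcement learning—to learn that a reach toward the cup is usually followed by the hand opening. The states, the “hand opens” signal and the parameter values are assumptions chosen for illustration, not the lab’s actual system.

```python
# Toy temporal-difference (TD) prediction of a repeated motion. All names and
# numbers here are illustrative assumptions only.
states = ["arm_at_rest", "arm_reaching", "arm_at_mug"]
V = {s: 0.0 for s in states + [None]}   # predicted "hand will open soon" signal
alpha, gamma = 0.1, 0.9                 # step size and discount (assumed values)

# One repetition of the tea-cup reach: (state, next state, signal observed)
reach = [("arm_at_rest", "arm_reaching", 0.0),
         ("arm_reaching", "arm_at_mug", 0.0),
         ("arm_at_mug", None, 1.0)]     # the hand opens once the arm is at the cup

for _ in range(200):                    # watch the motion repeat many times
    for s, s_next, signal in reach:
        V[s] += alpha * (signal + gamma * V[s_next] - V[s])   # TD(0) update

print(V)  # the learned prediction rises as the arm gets closer to the cup
```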

Pilarski said some top-of-the-line robotic prosthetics are amazing pieces of engineering with advanced capabilities—but they’re not at all easy to use.

People with prosthetic limbs often have to “cycle” through several joints before they can select the one they want to use, be it their wrist or elbow or hand joints. It can be time-consuming and can feel unnatural. Pilarski wants to change that.

“If every time they use their elbow, they always go on to use their hand, why make them select the wrist, then the wrist rotation? Why not let them go straight to their hand?” he explained.

In their research, Pilarski and a team used machine learning to build up the intuition, or “predictive ability,” of the prosthetic arm—ultimately letting it anticipate a person’s desired movements. It’s an AI approach called reinforcement learning, and the U of A is widely recognized as a leader in the field.

“We showed you can reduce the amount of time and the number of switches a person needs to make to complete reaching and grasping tasks,” he said. “Because there are patterns in our motions and patterns in our daily lives, and a system (can) make forecasts about those patterns.

“One observation was that, if you let the machine watch them use it, it builds up predictions based on situations about what joint they’re going to use next.”

The technology is called adaptive switching, and lab tests with amputees using these smart prosthetics have already shown great promise. Pilarski calls a paper he and other scientists published—including his student Ann Edwards, who was the lead author—one of the first “big wins” for the team.
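One way to picture adaptive switching: keep track of which joint the user actually engages in each situation, then offer the most likely joint first so fewer switches are needed. The sketch below does this with simple counts; the published system learns its predictions with reinforcement-learning methods, so the counting approach, the joint names and the “situation” labels here are simplified assumptions for illustration.

```python
from collections import defaultdict

class AdaptiveSwitcherSketch:
    """Toy stand-in for adaptive switching: reorder the joint list so the joint
    the user most often selects in the current situation is offered first."""

    def __init__(self, joints):
        self.joints = list(joints)
        # situation -> joint -> number of times the user chose that joint
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_use(self, situation, joint):
        """Call whenever the user actually engages a joint in a situation."""
        self.counts[situation][joint] += 1

    def switching_order(self, situation):
        """Offer joints most-likely-first, reducing the number of switches."""
        seen = self.counts[situation]
        return sorted(self.joints, key=lambda j: -seen[j])

switcher = AdaptiveSwitcherSketch(["hand", "wrist rotation", "wrist flexion", "elbow"])
switcher.record_use("after_elbow", "hand")   # user went from elbow straight to hand
switcher.record_use("after_elbow", "hand")
print(switcher.switching_order("after_elbow"))  # "hand" is now offered first
```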

It will take time before the ideas are adopted in a clinical environment, but Pilarski said other rehabilitation centres are starting to think more about an AI-based approach to prosthetics. And he’s excited about it.

He hopes to open-source the code for the adaptive switching technology, giving designers and developers at prosthetics companies free access to integrate it into artificial limbs.

“Often what we try to do with our technology is restore an ability that is lost. But when I think of the big potential for our technology, it would be of people working together with it to do things they could never do before. That would be a grand slam.”

Predicting schizophrenia

Across campus, two of Pilarski’s AI research colleagues are using the field’s intense data-crunching capabilities to predict major mental illness.

Russ Greiner and Mina Gheiratmand wanted to see whether there was a different way to diagnose a person with schizophrenia. Psychiatrists with years of clinical experience and medical education are trained to make these diagnoses based on behavioural symptoms, but Greiner and Gheiratmand wanted to see whether brain scans could provide clues about a person’s mental health—and the possible treatments they might need.

In collaboration with IBM, the researchers built an algorithm that learned a model for analyzing functional magnetic resonance imaging (fMRI) scans of the brain to determine whether an individual has schizophrenia. The “supervised learning” program is given a labelled dataset of scans from earlier subjects, each of whom has been identified either as having schizophrenia or not. It finds patterns in this information that distinguish brains affected by schizophrenia from those that are not.
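In outline, that supervised setup looks something like the sketch below: feature vectors derived from each scan, a diagnostic label per subject, and a classifier scored on held-out data. The random stand-in data, the feature count and the choice of logistic regression in scikit-learn are assumptions for illustration, not the study’s actual pipeline.

```python
# Minimal supervised-learning sketch, assuming each fMRI scan has already been
# reduced to a fixed-length feature vector with a diagnostic label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 95, 5000                    # stand-in sizes
X = rng.standard_normal((n_subjects, n_features))    # fMRI-derived features (synthetic here)
y = rng.integers(0, 2, size=n_subjects)              # 1 = schizophrenia, 0 = control

model = LogisticRegression(max_iter=1000)            # illustrative classifier choice
scores = cross_val_score(model, X, y, cv=5)          # accuracy on held-out folds
print(f"cross-validated accuracy: {scores.mean():.2f}")
```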

Gheiratmand described the data elicited from the fMRI scans as “high-dimensional.”

The scans produce values—such as blood oxygen level—at about 27,000 locations in the brain, explained Greiner. Each value is recorded at 137 time points during the scan, resulting in about 3.7 million data points per subject.

“We had 95 individuals. If you only had to look at five characteristics, you might be able to see what’s going on. But instead of five features, imagine it’s 5,000 features—already you can’t visualize or see it. But instead of 5,000 features, it’s 3.7 million,” said Greiner.

“However, a computer may be able to find some particular set of patterns within the 3.7 million values that is different in patients with schizophrenia versus healthy controls; that’s how patterns can be used to diagnose schizophrenia.”
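The sketch below shows one standard way a program can sift an overwhelming number of per-scan values down to a small, discriminative subset—univariate feature selection with an F-test—before any diagnosis is attempted. The method, the scaled-down value count and the synthetic data are assumptions for illustration, not what the study itself used.

```python
# Hedged sketch: pick the handful of values (out of very many) that best
# separate the two labelled groups. Sizes are scaled down from ~3.7 million.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
n_subjects, n_values = 95, 100_000                   # stand-in for the real dimensions
X = rng.standard_normal((n_subjects, n_values))      # synthetic per-scan values
y = rng.integers(0, 2, size=n_subjects)              # patient vs. healthy-control labels

selector = SelectKBest(score_func=f_classif, k=50)   # keep the 50 most separating values
X_small = selector.fit_transform(X, y)
print(X_small.shape)   # (95, 50): small enough that a person could begin to inspect it
```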

Greiner and Gheiratmand said they don’t think the algorithm will replace diagnosis by psychiatrists, but both believe the research could assist in psychiatrists’ work. Their current learned model appeared to be fairly accurate, diagnosing cases of schizophrenia with 74 per cent accuracy.

The next step is to use the same number-crunching techniques over perhaps slightly different sets of training data to determine what treatments might work best on individual patients—or whether people will become patients at all.

“You can study youth at risk who come to a clinic, using the data from the brain scans and with models that you have trained, to predict whether an individual is probably going to develop schizophrenia in, for example, a couple of years,” said Greiner.

Though the study’s sample of 95 subjects is not considered terribly small, Greiner wonders what kind of patterns or information could be gleaned from a sample of thousands or tens of thousands of patients.

That poses a fundamental challenge for researchers such as Greiner. AI is better at teasing patterns and nuance out of bigger datasets, but the data researchers need are protected by the privacy screens and regulations that define much of the Canadian health-care system.

Regardless, Greiner sees a paradigm shift taking place that might one day make his work easier.

“There’s a new generation that has cellphones, they have apps, they’re storing things about their bodies on their phones,” he said. “The data isn’t on a sheet of paper, it’s not at a doctor’s office … and many patients are aware that no one is trying to hurt them. Many patients want to consent.”

In Alberta right now, the Tomorrow Project is trying to get 50,000 people to open up their medical files for research such as Greiner’s.
