
Nvidia’s Deep-Learning Chips May Give Medicine a Shot in the Arm

The company sees medicine as the next big market for its machine-learning hardware.
March 28, 2017

The chip maker Nvidia is riding the current artificial-intelligence boom with hardware designed to power cutting-edge learning algorithms. And the company sees health care and medicine as the next big market for its technology.

Kimberly Powell, who leads Nvidia’s efforts in health care, says the company is working with medical researchers in a range of areas and will look to expand these efforts in coming years.

“There’s this amazing surge in medical imaging research,” Powell said at MIT Technology Review’s EmTech Digital conference in San Francisco on Monday. “More and more we’re visiting providers at hospitals today, and they’re imagining new artificial-intelligence applications.”

Most notably, a machine-learning technique called deep learning is being applied to processing medical images and sifting through large amounts of medical data. Deep learning, which is very loosely inspired by the way neurons in the brain seem to work, has already proved incredibly useful for recognizing objects in images and processing audio (see “10 Breakthrough Technologies: Deep Learning”).
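
To make the layered-neuron idea concrete, here is a minimal sketch; the framework (PyTorch), the image size, and the two diagnostic classes are illustrative assumptions, not details from the article:

import torch
import torch.nn as nn

# A toy deep network: stacked layers of weighted sums and nonlinearities,
# the "artificial neurons" the technique is loosely named for.
# Sizes are hypothetical: a flattened 64x64 scan, two diagnostic classes.
model = nn.Sequential(
    nn.Linear(64 * 64, 128),  # each of 128 units computes a weighted sum
    nn.ReLU(),                # nonlinearity, loosely akin to a neuron firing
    nn.Linear(128, 2),        # two output scores, one per class
)

x = torch.randn(1, 64 * 64)   # stand-in for one flattened image
scores = model(x)             # shape: (1, 2)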

This AI technique certainly seems to be gaining acolytes in medical research. Last year a team from Google showed that deep learning can be used to automate the diagnosis of eye disease. Meanwhile, a group from Stanford University published a paper in the journal Nature showing that the technique can spot skin cancer as accurately as a trained dermatologist. And a group from Mount Sinai Hospital in New York used the approach to analyze patients’ electronic health records and predict, with surprisingly high accuracy, which diseases a person would go on to develop.

These are just a few high-profile examples. Powell noted during her talk that large medical-imaging conferences have become dominated by deep-learning papers.

The graphics processors made by Nvidia are very well suited to performing the parallel calculations required for deep learning, and the chip maker has already built a sizable business supplying hardware to deep-learning researchers in academia and industry. Nvidia makes a growing number of specialized deep-learning products, including a powerful research computer called the DGX-1 and a system for self-driving vehicles called the Drive PX.
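
As a rough illustration of that parallelism (a sketch, assuming PyTorch with CUDA support on an Nvidia GPU; the article specifies neither), the matrix multiplications at the heart of deep learning can be dispatched to the GPU’s many cores:

import torch

# Large matrix multiplication dominates deep-learning workloads, and it
# parallelizes naturally across a GPU's thousands of cores.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b                    # runs on a handful of CPU cores

if torch.cuda.is_available():    # requires an Nvidia GPU and CUDA
    c_gpu = a.cuda() @ b.cuda()  # the same multiply, spread across GPU cores
    torch.cuda.synchronize()     # GPU calls are asynchronous; wait to finish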

Powell believes the company’s hardware will increasingly be found in hospitals and medical research centers, too. The approach could help improve the reliability of diagnosis, she said, and might significantly boost standards of care in developing countries, where expertise is scarce. Powell added that drug discovery would likely be another big area for deep learning in the future.

Deep learning might also help doctors find patterns that would otherwise be invisible. Nvidia is, for example, working with Bradley Erickson, a neuroradiologist at the Mayo Clinic, to apply deep learning to brain images. Erickson has had some success in identifying genetic factors related to brain disease from images, Powell said.

Earlier, at the same event, Gary Marcus, a professor from NYU, singled out medicine as the area in which AI could have its biggest impact. “Think about cancer,” Marcus said. The risk factors that might indicate the likelihood of such a disease may be hard for a person to identify, but they could be uncovered by an algorithm, he said. “The killer app [for AI] might be major advances in how we treat medicine.”

There are, however, significant challenges in applying techniques like deep learning to medicine. The approach is so complex and opaque that it may not be clear to a doctor why an algorithm comes up with a particular diagnosis. Powell acknowledged this challenge but said that solutions, such as new ways of visualizing the behavior of deep-learning networks, were emerging. “It’s a big topic in research right now,” she said.
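
One well-known visualization technique of the kind Powell alludes to is the saliency map: the gradient of a prediction with respect to the input shows which pixels most influenced it. A minimal sketch, again assuming PyTorch and a hypothetical toy model standing in for a real diagnostic network:

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained diagnostic model (64x64 inputs, 2 classes).
model = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))

x = torch.randn(1, 64 * 64, requires_grad=True)  # track gradients w.r.t. input
score = model(x)[0, 1]    # the model's score for the class of interest
score.backward()          # backpropagate that score down to the input pixels

# High-magnitude gradients mark the pixels that most swayed the prediction.
saliency = x.grad.abs().reshape(64, 64)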
