

Will A.I. Make Medicine More Human?

Cardiologist Eric Topol explores how machine learning could help doctors reconnect with patients.

By Alex Orlando | May 16, 2020 3:00 PM
(Credit: Scripps Research Translational Institute)


This article appeared in the June 2020 issue as "Will A.I. Make Medicine More Human?" Subscribe to Discover magazine for more stories like these.


Today, going to see your doctor can feel a little impersonal; to many, physicians appear rushed, uncaring and aloof. According to a 2019 study in the Journal of General Internal Medicine, doctors ask patients about their concerns only around a third of the time. When they do ask, they interrupt within 11 seconds two-thirds of the time. And because physicians must now enter medical data into electronic health records, they often spend appointments tending to their computer keyboards instead of their patients. These short, awkward visits could have big consequences: a 2014 study estimates that around 12 million adults are misdiagnosed in the U.S. each year.

But, somewhat paradoxically, cardiologist Eric Topol thinks that machines — specifically, artificial intelligence — might be able to help.

Topol, who is also founder and director of the Scripps Research Translational Institute, has long championed the marriage of medicine and technology. In recent years, he’s investigated how sensors, imaging, telemedicine and other tech could herald a new digital revolution in medicine, as well as how patients might one day both generate and own their medical data. In his latest book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Topol suggests that AI could help improve health care by giving doctors more time to connect with patients. 

Artificial intelligence has already begun to make waves in medicine. Researchers have developed algorithms to identify pneumonia from chest X-rays, assess the risk of heart disease from MRI scans and even predict which types of skin lesions may become cancerous. But Topol thinks machines could also take charge of more mundane tasks like taking notes, giving doctors more time to spend with patients. Discover recently caught up with Topol to talk about the bond between doctor and patient, AI’s potential pitfalls and other aspects of what he calls “this counterintuitive story of making the human aspects of medicine better by using machines.”

Q: After 35 years as a cardiologist, what led you to transition away from patient care as a physician to focus on research and digital medicine? 

A: Well, I haven’t fully [transitioned] because I still see patients — I was in the clinic today. I basically just devoted more effort to the research side, but I never wanted to give up the clinical, patient care part. Because that’s what it’s all about, right? All the research that I’m involved in has some type of connection with patients to try and improve medicine. 

Q: Why is that connection between doctor and patient so important?

A: In the clinic today, I was finishing up with a [medical] fellow who I’ve worked with for the last two years. One of his gifts was his ability to connect with our patients. One of the patients today was crying about him moving on and finishing his fellowship. That, to me, is the essence of medicine. People tend to think I’m very high-tech and into all sorts of gadgets, sensors and AI. But having been sick [myself], I appreciate that relationship all the more. Medicine is nothing without it.

Q: You’ve recently talked about how medicine today is characterized by a lack of human connection between doctors and patients. For example, you’ve noted that electronic health records have essentially turned doctors into data technicians. How did we get to this point?  

A: The cardinal sin was letting medicine become such a big business. The electronic health record is the single worst abject failure of modern medicine, because it was set up for business purposes — for billing — only, without any regard for what would benefit doctors, patients or any other clinicians. That’s one big part of it. 

The other is the unchecked growth of administrative personnel, who now outnumber those who actually take care of patients by roughly 10 to 1. All of this was meant to increase productivity. Unfortunately, over the course of decades, medicine lost its way.

Q: Do patients have any power in today’s medical landscape, or does technology always work against their interests? 

A: There’s tension here, because some things are promoting [patients’] empowerment, like the ability to generate their own data. One example is an Apple Watch, where they could get their heart rhythm detected if it’s abnormal. Or, in the U.K., you can get urinary tract infections diagnosed with an AI kit. Or you can get your child’s ear infection diagnosed without a doctor, through a smartphone. And [soon it will be possible to diagnose] lesions, rashes or cancer of the skin through a picture and an algorithm. There are many different ways in which [patient] empowerment is getting off the ground. And it’s doctorless, with sensors and cameras that will lead to an algorithmic interpretation that’s accurate — without the need to connect with a doctor.  

But at the same time, we have this sparse data access and lack of control by the individual, who should be the rightful owner [of their own medical data].

Q: Let’s talk about patient data. Some types of artificial intelligence, like machine learning algorithms that interpret imaging scans, take place behind the curtain, completely invisible to patients. Should they know when — and how — their data is going to be used? 

A: AI has crept into people’s lives in so many ways — whether it’s a recommendation for a song, an Amazon purchase or a word [that] autocorrects. All of these things are happening. So this algorithmic invisibility got embedded in our lives. It’s one thing to have an autocorrect; it’s another thing to have a medical issue. I think we need to take a step back and partition the normal, daily-life things that aren’t serious matters versus the algorithms that will be part of one’s medical diagnostics and treatments.

Q: How concerned are you about racial bias in health care, including AI? For instance, a 2019 study in Science found that a widely used algorithm was racially biased. The algorithm was intended to help hospitals predict which patients might benefit from additional treatment, based on their previous “cost of care,” or their past medical expenses. But it assigned the same level of “risk” to sicker black patients as it did to healthier white patients. How can AI become biased? 

A: Algorithms don’t know about bias; it’s about the humans that are putting the data in. Here, the big mistake was that the [developers] assumed that if you had a lower cost of care in the database, that meant you were healthier. But, no, it could mean that you just don’t have access to care. As it turned out, when the [researchers] looked at the data, they realized that many of the people who had low cost of care were black people who had no or little access. It had nothing to do with the algorithm. What we have is human bias, and we then blame it on the machines.

Q: What about mental illness — can AI help there? 

A: This is one of the most exciting new directions that we have. Because mental health problems, particularly depression but all across the board, are so important. They’re also understaffed in terms of capable counselors, psychologists, psychiatrists and mental health professionals in general. So the ability to quantify, in real time, a person’s state of mind is an extraordinary new development. Whether that’s how you strike a keyboard, the intonation of your speech, your breathing or all of the other parameters that can be assessed passively, without any effort. There are many different ways to capture that data. Now, we can quantify that. We’ve never been able to do that — it was all subjective, like, “Are you feeling blue?” 

The other [development] was the realization that people are entirely comfortable talking to [AI] avatars. They don’t have to talk to a human. In fact, they’d prefer to disclose their innermost secrets to an avatar. That still, to me, is shocking, but it’s been replicated with multiple studies now. 

The field using AI in mental health care, while it’s still very underdeveloped and early, is one of the greatest opportunities going forward. Because there’s a terrible mismatch of the burden of mental health and the field’s ability to support people. I think the promise here is quite extraordinary. It’s using technology to enhance human mental health, which never tends to get the same respect as physical health.

Q: How do you reconcile your optimism with AI’s darker side, like the potential for surveillance and data hacking? 

A: Well, I’m an optimistic person; I always have been. My wife always chides me about that. … [But] I’m aware of where things can go wrong — everything from a nefarious attack on an algorithm to a plain software glitch that we’re all too familiar with. And bias making inequities worse. All sorts of disruptive, dystopian things. 

Awareness of that is one part of the story. Another, interestingly, is that AI can make things better or worse across the board. It can make inequities worse, or it can make them better; it can make bias worse or it could improve it. Any way you look, you can say that it’s a two-edged sword. It’s very powerful and it could make a lot of these things better or worse. Only time will tell, and we’re in the very early stages, for sure.

Q: Why did you become so invested in exploring how AI might bring humanity back to health care?  

A: We’re in a desperate state, and we need to acknowledge the lack of human connection and empathy [in medicine]. It’s the loss of the “care” in health care. We may be looking at our best potential solution for multiple generations to come. This is more attractive and alluring, at least from its potential, than anything I’ve seen throughout my 35 years. I’m an old dog, and I’ve seen a lot, but never anything like this.

I do think it’s going to take a lot of work and a lot of validation. When you have something as powerful as this, and if you do it right, you can get medicine back on track to the way it was 40 years ago — at that time, it was a whole different model. It was a very close, trusted relationship when you were with the doctor. And you knew that when you were sick, there was somebody there who had your back; who really cared for you, had time for you and wasn’t looking at a computer screen. We could get that back. That’s exciting.

Q: How far are we from that reality?  

A: Because of my optimism, I tend to always guess too short of a time. And then I look at my grandchildren, who are ages 5 and 2. And I’m just hoping that, realistically, by the time they get older, it’ll have restored medicine. But it’s going to take a while. It’s not going to happen in one fell swoop, either. But I’m hoping that we’ll see the beginning of that in the next five years. And billboards, instead of touting that the health system is the best in the country, will instead say, “We give our patients time. We give our doctors and nurses time with patients.” If we start seeing competitiveness among health systems for the gift of time, that will be the beginning of this “back to the future” story. 
