I have always been a proponent of the Open Notes movement and efforts to incorporate patient-generated data into clinical practice. But at AMIA’s iHealth 2017 Clinical Informatics Conference here in Philadelphia yesterday, I saw a great presentation about the challenges clinicians face trying to make sense of data submitted by patients’ wearable fitness tracker devices.
Joshua Pevnick, M.D., a hospitalist and assistant professor of medicine in the Cedars-Sinai Division of General Internal Medicine in Los Angeles, described what happened when Cedars-Sinai invited patients to upload personal fitness device data to the Cedars-Sinai Epic EHR through the patient portal.
Pevnick started by saying that the project came more from the IT side of Cedars-Sinai Health System and that what he was describing was clinicians’ response to the new data stream. “We did not realize all the challenges it would introduce,” he said.
Clearly the movement toward wearable fitness devices is growing rapidly. Pevnick noted that 1 in 5 Americans owns a wearable technology device. The Cedars-Sinai EHR is capable of ingesting data from wearables such as Fitbit, Withings, Apple Health, and Google Fit. (The heart rate data he described was all coming from Apple Watches.)
In April 2015, Cedars-Sinai began allowing patients to use the portal to connect their devices, with a notice that clinicians might not have time to review the data. "It is pretty easy to do," Pevnick said. But of the 80,000 patients who use the portal, only 450 signed up in the first month to share data from wearables. Apple Watch support was added in May 2015; the watch checks heart rate every 10 minutes and can be set to sample even more frequently. Google Fit and glucometer support was added in March 2017. Some patients were submitting their daily step count from their Fitbit.
Cedars-Sinai researchers sometimes offer subjects wearables as part of research studies, and that data is going into the EHR now, too. The number of participants has grown from 450 to 2,800 today.
One question clinicians had revolved around patient demographics. Were they young and healthy or older and sicker? Having heart rate data on young and healthy people is much less valuable than having it available on older and sicker people. “As a general internist, I need the old and sick people to adopt it if it is going to be of a lot of value to me,” he said.
Of the initial 450 patients, it turned out that they were predominantly young, white and male, although they did have higher than average body mass indexes, he noted. When they studied the data, neither medical conditions nor health spending seemed to predict adoption.
With the heart rate data, one of the first things clinicians noticed was that the application available to view the data is not optimized for displaying so many data points. "It is good for an inpatient setting, but for thousands of data points, not so good," Pevnick said.
Then the clinicians came to the real cultural problem. They started seeing abnormal heart rate data. “I am used to ordering a test and following up,” he said. But all of a sudden physicians had patients entering data that showed very low and very high heart rates. “It is not a test I ordered,” he said.
Is it a device error? (These are not FDA-approved devices, and could be sending bad data.) Manual data entry error? (They had one glucose reading of negative 11!) Normal variants? Or is it cardiac pathology? All of a sudden, there is a 70-year-old with a heart rate of over 210 in the EHR, and it is not a test any clinician ordered. They had to decide what to do about those cases.
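A first line of defense against the errors Pevnick lists (device glitches, manual entry mistakes such as that glucose reading of negative 11) is a plausibility check before a value ever lands in the chart. The sketch below illustrates the idea; the range limits, function name, and data shape are my own assumptions for demonstration, not Cedars-Sinai's actual rules.

```python
# Illustrative plausibility screening for patient-submitted readings.
# The limits below are demonstration assumptions, not clinical guidance.

PLAUSIBLE_RANGES = {
    "heart_rate": (20, 300),   # beats per minute
    "glucose": (10, 1000),     # mg/dL; would reject an entry of -11
    "steps": (0, 200_000),     # daily step count
}

def screen_reading(measure: str, value: float) -> str:
    """Return 'accept' if the value is physiologically plausible,
    otherwise 'reject' so it is never charted unreviewed."""
    low, high = PLAUSIBLE_RANGES[measure]
    return "accept" if low <= value <= high else "reject"
```

Such a filter catches only impossible values; it cannot distinguish a real heart rate of 210 from a normal variant, which is exactly the judgment call the clinicians faced.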
It made them ask the question: whose responsibility is it to monitor this regular feed of data? The patient? The physician? “We decided to look into it further,” said Pevnick, who reiterated that the interfaces are not designed to see so many data points. “There is not a good way to review it.”
They studied the outliers in heart rate: readings below 40 or above 200 beats per minute. They had the de-identified data reviewed by a group of clinical informaticists, including a cardiologist. Where concern warranted it, they did a chart review. If there was something in the chart, they contacted the cardiologist or the primary care doctor.
In six concerning cases, chart review did not suggest device error. Three cases corroborated the personal fitness device data, but had already been recognized. (One person had already died from an infection.) The fitness device data may have offered earlier detection, but did not improve these patients’ outcomes.
Pevnick noted that the use of patient-generated data in EHRs is only going to increase, and it is important for health systems to decide who is responsible for abnormal values. Medicine has always been physician-centric, but there will be more opportunity for patients to push data in, he added.
Pevnick said there has been some physician pushback against being held responsible for signals in fitness tracker data, and others are probably unaware the data is even there.
If the data is going to be used, it needs to be made easier for clinicians to access. “What processing can we automate?” he asked. “We need algorithms to screen out worrisome data and we need ways to visualize data to make it more digestible.”
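The kind of automated processing Pevnick calls for can be sketched simply: flag readings outside the study's own outlier thresholds (under 40 or over 200 bpm) for human review, and thin the remainder so a clinician-facing view is not drowned in 10-minute samples. The function names and data shape below are illustrative assumptions, not the Cedars-Sinai implementation.

```python
# Illustrative sketch: screen heart-rate readings for review and
# downsample the rest for display. Thresholds match the study's
# outlier definition; everything else is an assumption.

def flag_outliers(readings, low=40, high=200):
    """Split (timestamp, bpm) readings into flagged outliers and normals."""
    flagged = [r for r in readings if r[1] < low or r[1] > high]
    normal = [r for r in readings if low <= r[1] <= high]
    return flagged, normal

def downsample(readings, every=6):
    """Keep every Nth reading, e.g. hourly points from 10-minute samples."""
    return readings[::every]
```

A screen like this reduces the review burden but does not answer the harder question the article raises: who is responsible for acting on what the filter surfaces.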