
November 6, 2020

How Population Health Risk Algorithms Introduce Racial Bias into Every Day Practice

In this podcast, the Advisors discuss a recent Science magazine article that exposed how commonly used population health algorithms introduce racial bias into many clinical decisions. They review the underlying reasons for these shortcomings and discuss ways in which they can be mitigated.


Transcript

Graham Brown:

Hello, this is Graham Brown, Senior Vice President and Principal with NextGen Advisors. Welcome to our podcast series. I'm joined today by my colleagues, Dr. Marty Lustick and Dr. Betty Rabinowitz. Welcome, Marty and Betty.

Dr. Martin Lustick:

Thanks, Graham. Great to be here.

Dr. Betty Rabinowitz:

Hey, Graham. It's good to be here.

Graham Brown:

Today, we want to discuss how commonly used algorithms may introduce bias, as they rely upon cost data as a proxy for health needs. An interesting article in the October 25th issue of Science is prompting this discussion. In the article, Obermeyer et al. find evidence in their research that black patients assigned the same level of risk by the algorithm are sicker than white patients. They estimated that this racial bias reduces the number of black patients identified for extra care by more than half. At a given risk score, black patients are considerably sicker than white patients, as evidenced by signs of uncontrolled illness. So that's the background here from this article that we want to talk about. Betty, you wrote a blog about this subject, which we published recently. Help our listeners understand a bit more about this issue. If biased algorithms are being used in healthcare settings, what are the implications for how care is provided?

Dr. Betty Rabinowitz:

Thanks, Graham. This was a worrisome revelation, because I think those of us who are very engaged with population health felt that the ability to measure risk supported the decisions a practice makes about which patients need and deserve higher levels of care, including care management and other enriched services and interventions. The truth of the matter is, we felt that as long as we were risk stratifying patients and identifying the high-risk patients, we were doing the right thing by providing those patients these services. What emerges from the article is that there are built-in deficiencies in these algorithms that expose patients, African American patients in some cases, and other minorities, to unfair treatment, and that there have to be other measures and other ways of measuring risk that mitigate these biases. For example, if cost is the only indicator the algorithm is using, patients who generate less cost look lower risk than they really are. African American patients tend to avoid care because of lack of trust. They tend to have less access to care because of disparities in access in urban and underserved settings. When they do get to physicians, physicians tend to have different referral patterns for these patients, and they tend to be referred less frequently for surgeries and less frequently for specialist consultations. All of these elements have to be mitigated in some way. During the conversation today, I'm sure we'll talk about some ways to do that, but it was a very significant finding, and really timely that it was brought to the forefront.
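To make the mechanism Betty describes concrete, here is a minimal, purely illustrative Python sketch with invented numbers. It is not the article's model or any NextGen algorithm; it simply shows how a cost-based risk score under-identifies patients whose true need is the same but whose observed spending is suppressed by barriers to care.

```python
import random

random.seed(0)

# Two hypothetical groups with the SAME underlying clinical need; group B's
# observed cost is suppressed (avoidance of care, less access, fewer referrals).
def simulate_patient(cost_suppression):
    need = random.gauss(50, 15)                 # true need, arbitrary scale
    cost = max(need * (1 - cost_suppression)
               + random.gauss(0, 5), 0)         # observed spend tracks need, minus suppression
    return {"need": need, "cost": cost}

group_a = [simulate_patient(cost_suppression=0.0) for _ in range(1000)]
group_b = [simulate_patient(cost_suppression=0.3) for _ in range(1000)]

# A cost-as-proxy "risk algorithm" flags the top 10% of spenders for extra care.
everyone = group_a + group_b
threshold = sorted(p["cost"] for p in everyone)[int(0.9 * len(everyone))]

flagged_a = sum(p["cost"] >= threshold for p in group_a)
flagged_b = sum(p["cost"] >= threshold for p in group_b)
print(f"Flagged for extra care: group A = {flagged_a}, group B = {flagged_b}")
# Despite identical need, group B is flagged far less often, which mirrors the
# under-identification the Science article describes.
```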

Graham Brown:

If health systems and payers are relying on this data, as Betty was saying, to target patients for high-risk care management programs, they're assuming that the individuals with historically high utilization from a cost perspective will benefit the most from those programs. I think there's probably something missing from that approach. Help fill in the blanks of what might be missing.

Dr. Martin Lustick:

My own experience with this, it's interesting, because to me, this is part of a much broader point. It's kind of like the analogy of the person who's looking for their keys under the lamppost, because that's where the light is shining. We've had claims data available for the longest period of time as a source of data, and there are always unintended consequences when you try to use it for something other than what it was designed for. This is an example of trying to use claims data to support a clinical program, when claims processing and all the rules around claims were never created for that reason. That's part of the problem, in my own experience. When I was on the provider side earlier in my career and we created our first database of all of our patients with diabetes, we stratified based on the severity of their illness, and all of our interventions as a provider group were based on how sick our patients were. We didn't have claims data in that setting. While there were, I'm sure, inherent biases in the way we did it, we didn't have the fundamental issue of claims. Now, more recently in my career, having spent time on the health plan side, I've seen this happen over and over again: we use claims data to try to accomplish other things, and we almost always have this kind of unintended consequence.

Graham Brown:

Let's go into that a little bit further, because Betty's prior work helped address some of this specifically. Betty, in your blog, you explain why multiple sources of data support a better assessment of actual care needs, and, indeed, as you and your team were developing NextGen's Population Health Solution, this was accounted for. What sources does the population health platform use, and how does that guard against introducing bias?

Dr. Betty Rabinowitz:

Really, the answer to the question of what sources is: the more the merrier, for the population health tool specifically, but also generally and philosophically. That's one of the ways to mitigate this risk, because each additional source of data helps cover a blind spot in a single-source dataset. In this case, we're talking about claims data, but any single source of data is fraught with some risk of bias. Creating a tapestry of data sources that includes clinical information, laboratory information, objective clinical measures of disease control and disease severity, social determinants of health, HIE data, and any other sources of data is useful and important. In the pop health system, the NextGen Population Health Platform, that is accommodated. The other way to mitigate this issue is not to rely on a single risk algorithm, but to provide the users of the system with multiple algorithms that can be applied to the same cohort of patients. Take a group of 100 diabetics, provide the group with two, or three, or four risk algorithms that they can apply, and trust that the algorithms may all have biases, but they're unlikely to have exactly the same biases, so you can resolve some of those issues that way. Multi-source data and multiple algorithms are one way to resolve some of these issues until, obviously, we can correct some of the core disparities that do exist in our healthcare system, but at least we shouldn't be adding to them with measurement tools.
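To illustrate the multi-algorithm idea Betty describes, here is a minimal Python sketch. The scoring functions and field names are hypothetical stand-ins, not NextGen Population Health algorithms; in practice each would draw on a different data source (claims, labs, social determinants, HIE feeds), and a patient is surfaced if any one of them ranks that patient near the top.

```python
# Hypothetical scoring functions; each one leans on a different data source.
def score_by_cost(patient):
    return patient["annual_cost"] / 1000.0

def score_by_clinical_control(patient):
    return (patient["hba1c"] - 6.5) + patient["er_visits"]

def score_by_sdoh(patient):
    return 2.0 * patient["sdoh_flags"]

ALGORITHMS = [score_by_cost, score_by_clinical_control, score_by_sdoh]

def stratify(cohort, top_n=10):
    """Flag a patient if ANY algorithm ranks them in its own top_n, so a
    blind spot in one data source is less likely to hide a sick patient."""
    flagged = set()
    for algo in ALGORITHMS:
        ranked = sorted(cohort, key=algo, reverse=True)
        flagged.update(p["id"] for p in ranked[:top_n])
    return flagged

# Toy cohort of 100 diabetic patients with made-up attributes.
cohort = [
    {"id": i, "annual_cost": 500 * i, "hba1c": 7 + (i % 5),
     "er_visits": i % 3, "sdoh_flags": i % 4}
    for i in range(1, 101)
]
print(sorted(stratify(cohort, top_n=10)))
```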

Graham Brown:

Marty, from a health plan perspective, the claims cost experience for a white person and an African American person might actually add up to the same dollar amount in any given year. But the clinical services that were provided to each patient in that year might actually be quite different, with different sites of care and different types of providers. How do those factors potentially introduce bias when we're planning for what kind of care patients might need?

Dr. Martin Lustick:

Yeah, it's an interesting question, because I think what it unmasks is a lack of capability on the health plan side. In the health plan world, for anything other than Medicaid, plans often have no idea what the racial and ethnic makeup of the population they serve happens to be. If people aren't filling that out in their enrollment forms, which happens in Medicaid but not necessarily in commercial at all, then the health plan is blind to these issues, and blind in a way that's not healthy, obviously, for the patients, because all these biases come through in the way that claims data is used, without the health plan having any awareness of it.

Graham Brown:

Another question for both of you. Knowing there are healthcare disparities, how are providers going about learning about the other needs that certain patient populations may have, if it's not in the claims data, as you're saying, and how do they go about incorporating those needs into their actual practice? Marty, do you want to start that one?

Dr. Martin Lustick:

Sure. I think this is a really important aspect of this issue. The article that you referenced at the beginning really focuses on the data infrastructure, and how you stratify risk and decide who you need to intervene with. If we step back and look at the issues more broadly, we know that healthcare disparities along racial and ethnic lines are a real issue across the country. On the provider side, I think there are opportunities to understand the community that you're serving and the unique needs of subsets of that community, whether those subsets are racial and ethnic, or whether there happens to be a large deaf population, for example, and to build programs that meet the unique needs of each of those subsets so that you can focus on closing those disparities at the operational level, in the way you do your day-to-day business. I think, if you complement that with improved abilities in stratification, using a better population health tool that has less bias in it, you really can make progress in this space.

Dr. Betty Rabinowitz:

I think one of the important lessons from this is that as we develop these pretty sophisticated population health tools, which give groups the ability to identify a cohort of patients, learn about them, and risk stratify them, that is wholly important, and, obviously, teaching folks the limitations of these tools, how to use them properly, and how to mitigate some of the biases with proper use of the system is important. But it is always important to remember that nothing substitutes for clinical judgment and the knowledge that a good physician, a good primary care physician, a good primary care nurse, a good primary care med tech, has in knowing their population very, very well. We used to laugh and kind of kiddingly say that at any of these decision points, getting a patient back in, assigning a patient to care management, we should add a gestalt button to the software, so that the gestalt button can override any algorithm. If an algorithm comes up with an obviously wrong answer, override it. Clinically override it. If Mr. Smith is clearly a high-risk, complex patient with incredibly complex social determinants of health issues, was recently widowed, recently lost a job, all of the things we know can completely impact a patient's ability to cope and thrive in the context of illness, override the system. The pendulum has swung completely in the other direction, from having no tools and no ability to do analytics on groups of patients with common denominators, to completely ceding clinical judgment to these tools. I think it's a good wake-up call to say that nothing, at the end of the day, substitutes for good clinical judgment. It helps, it supports, it's an efficiency tool. You can look at 6,000 patients, find the top 300, and then apply clinical judgment to the rest.
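Here is a minimal sketch of the "gestalt button" Betty describes, written as a hypothetical enrollment check in Python; the field names are invented for illustration. The point is simply that a clinician's explicit override takes precedence over whatever the risk algorithm computes.

```python
def needs_care_management(patient, algorithm_score, threshold=0.8):
    """Enroll if the algorithm says so, OR if a clinician has overridden it."""
    if patient.get("clinician_override"):   # the "gestalt button"
        return True
    return algorithm_score >= threshold

# Mr. Smith scores low on the algorithm, but he was recently widowed and lost
# his job, so a clinician flags him and he is enrolled anyway.
mr_smith = {"name": "Smith", "clinician_override": True}
print(needs_care_management(mr_smith, algorithm_score=0.35))  # True
```

A similar rule covers the health plan example Marty gives next: a referral from any clinician in the organization goes straight to the head of the line, regardless of the algorithm's ranking.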

Dr. Martin Lustick:

It's interesting that you say that, Betty, because at the health plan where I was, we had a pretty sophisticated algorithm that filtered through and identified high-risk patients. But we also had a policy that, basically, any clinical person in the organization, a medical director, or a nurse, or a social worker, if they were involved with a case in any way, even just reviewing a service for prior authorization, and for whatever reason they thought the patient would benefit, they could make a referral, and those folks went right to the head of the line.

Dr. Betty Rabinowitz:

Yeah, absolutely. I think it's an important thing to remember, especially in the context of this Science article, that sadly there are communities or practices that serve populations that, in their entirety, are high risk. How do you then take a group of 2,500 or 3,000 patients in a family physician's practice and risk stratify them, when in any other context all of those patients would be at the top of the list? These tools, at some point, cease to discriminate when you apply them to a population where the risk is enormous across the board, and then you have to start applying other forms of intervention and decision making about who gets scarce resources. Look, if we could provide everyone with care management, it probably would be a desirable thing to do, but that's not the reality. We do have to allocate these services to the folks who are most impactable. Now, impactability raises a whole other issue, which is, once you've identified these high-risk patients, how do you then decide which of those patients will respond favorably to these enriched services? My guess is that most of the tools being used to identify impactability have a fair amount of bias associated with them as well.

Graham Brown:

Well, great. Thank you both. To our listeners, thanks for joining us today. You know where the subscribe button is. Dr. Betty Rabinowitz and Dr. Marty Lustick joined me for the conversation today, and I appreciate their perspectives. On behalf of NextGen Advisors, this is Graham Brown. Thanks for listening, and have a great day.