Insurers leaning on AI to identify social needs for unlikely customers


Insurers are increasingly investing in predictive analytics to identify social determinants of health needs among commercial enrollees, as a growing number of payers recognize that social support shouldn’t begin and end with their government-sponsored customers.

Eighty-three percent of large employers named addressing the social determinants of health as essential to their strategy over the next three years, according to a Willis Towers Watson survey released in April. In response to employer demand, insurers are leaning on technology to inform these new benefit structures, although some question the validity of the data these companies rely on.

“I want to know whether development of the algorithm was rooted in community research and participation,” said Kimberly Seals Allers, founder of the Irth mobile app. “That is how you can create technology products that don’t perpetuate the systems of oppression that got us here.”

This month, UnitedHealth Group introduced a predictive analytics program aimed at addressing the social support needs of enrollees in some employer-sponsored benefit plans. While the Minnetonka, Minn.-based health giant has long offered these programs to its Medicaid and Medicare populations, the COVID-19 pandemic’s disproportionate impact on Black and Brown enrollees signaled to employers the importance of offering benefits around the social determinants, said Rebecca Madsen, chief consumer officer for employer and individual customers. UnitedHealthcare now offers its predictive analytics system, for a fee, to the company’s self-insured customers, which include 148 large employers.

“There’s an increasing understanding by our employer population that 80% of health is influenced by social determinants of health, not basic clinical data and information,” Madsen said. “So how our conversations with our employer base go is, ‘How can you make sure that you’re supporting my employees in a holistic, 360-degree way? Not just the clinical, but putting all of these pieces together?’”

UnitedHealthcare’s in-house AI system crunches de-identified claims data, Census information, member assessments and more to generate a personalized health risk score for each of its members. When individuals call into UnitedHealthcare’s call center, company representatives see this score and, if appropriate, refer them to organizations drawn from a directory of 543,000 local, state and national groups to help bridge access gaps.

“It’s better done in-person than through a depersonalized mechanism,” Madsen said, adding that, since introducing this capability, eligible individuals have accepted more than 50% of the offers for support.

Like UnitedHealthcare, Humana started its social determinants of health work with its Medicaid and Medicare populations. The pandemic kick-started the Louisville, Ky.-based insurer’s work in this area, both because of heightened need among beneficiaries and as a way to meet the member-care spending required under the medical loss ratio, or MLR, rules during a record period of care deferrals, said Dr. Andrew Renda, associate vice president of population health.

To identify members with unmet social needs, Humana’s data governance team standardizes and certifies data from a variety of sources, and its AI systems then extrapolate the findings across individual populations. The insurer conducted 6 million screenings among its members in 2020 and is on pace to conduct at least as many this year, he said.

“We can find those needles in a haystack without having to directly survey everyone,” Renda said.

To help create individual platforms and dashboards for these employer clients, the company relies in part on a partnership with Microsoft, which has cut the time it takes to build an individual AI model from six months to a matter of hours, Renda said. That approach differs from UnitedHealthcare’s, which relies solely on in-house technology to power its operations.

“With employer groups, in some ways it’s more challenging because you’ve got to take them one by one, depending on what their business is, what their employee population looks like, what their average salary is and where they are geographically,” Renda said. “A lot of different factors can influence the prevalence you see in certain things.”

Anthem likewise relies on a mix of in-house tech and outside partners to create unique AI models for its populations. The company pulls information from more than 20 sources to create a dataset made up of 30,000 unique variables and more than 740 million data points associated with health-related social needs, said Rajeev Ronanki, chief digital officer.

The Indianapolis-based insurer then integrates this information with members’ lab and claims data to create a lifetime patient record, which both the payer and providers use to highlight members’ prospective social needs. Anthem has so far screened “several tens of thousands” of individuals for health-related social needs this year, Ronanki said. It also uses the system to inform its benefits strategy: after realizing that many members lacked adequate childcare, which was hurting their productivity, the company added babysitting as a benefit for some of its employer customers.

“Employers are keen to create the right conditions in order to make sure that their employees are the most productive when they come to work,” Ronanki said.

But the use of predictive analytics to screen members for social needs raises concerns over algorithmic bias, or the idea that biased data fed into an AI system will produce inaccurate results, said Seals Allers, the founder of Irth, a Yelp-like mobile app that allows people of color to rate and review their prenatal, birthing, postpartum and pediatric-care providers.

Irth aims to convert people’s individual experiences into quantitative data that payers and providers can use as a quality measure. The startup has launched pilot programs with a health system in Pittsburgh and partnered with Ascension St. John Hospital in Detroit.

Although UnitedHealthcare, Humana and Anthem all boast partnerships with third parties such as the Responsible AI Institute to help guide the development of their algorithms, Seals Allers stressed that community outreach is critical to making sure the suggestions surfaced are realistic, and not simply a matter of referring an individual to a food bank because of their skin color.

“If we’re creating technology that is really trying to address these issues, the most important thing is making sure that Black and Brown folks, or whoever the target audiences are, are being included in the design and creation of these algorithms,” she said. “So then it’s not perpetuating some of the inequities that we’re trying to resolve.”