Calculating shared savings: Best practices for achieving accurate estimates


With a defined set of best practices, replicable estimates of the monetary value created by value-based health care improvement solutions are achievable.

This article is the second in a series focused on calculating shared savings from value-based health care improvement solutions. To date, a formal set of best practices does not exist, and further complicating this ambiguity, no single entity is leading the development of best practices for monetizing healthcare value. The confusion and lack of standardization are compounded by rapid growth in value-based contracting between commercial payers and providers, creating an imperative for the industry to align on a consistent set of technical specifications, or best practices, for monetizing value in a healthcare context. This article aims to enumerate the key decision points one must evaluate when monetizing value, highlight Coarsened Exact Matching (CEM) as the prominent method for monetizing value, and lastly, identify a set of best practices providers and payers can follow when entering into value-based contracts.

Best Practices for Health Care Value Monetization

The following 10 sections provide insights and guidance for both payers and providers entering into value-based contracts. The content is based on my nearly 15 years of experience collaborating with payers, employers, state governments, health benefit consultants, law firms, consulting firms, and other population health program providers. Thus, what follows is derived from actual negotiations, reconciliations, and legal challenges I have been engaged in and certainly learned from over time. While the best practices described here are comprehensive, any entity entering into a value-based contract should carefully evaluate all possible factors influencing monetization of the value expected to be created.

Measured Population

The measured population will ideally consist of only those members who meet a common criterion or set of criteria; the converse is measuring the total population. A primary means to identify the measured population is to focus on those individuals with the specific outcomes of interest. For example, PopHealthCare identifies the top 8% to 12% of the total population for inclusion in their CareSight solution based on the distribution of predictive model scores derived from a machine learning model estimating the likelihood a member will be one of the most expensive members in the next 12 months. CareSight is an in-home medical care service for the most complex members in a health insurer’s population. Measuring the total population is relevant to value-based programs designed to influence the health of every member; the Next Generation Accountable Care Organization (NGACO) pilot is one such program.
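As a rough illustration of this selection step, the sketch below (Python) keeps only members whose predictive model score falls in the top slice of the score distribution. The `predicted_high_cost_score` column, the 10% cut, and the scoring model itself are hypothetical placeholders, not any vendor's actual implementation.

```python
import pandas as pd

def select_measured_population(members: pd.DataFrame,
                               score_col: str = "predicted_high_cost_score",
                               top_share: float = 0.10) -> pd.DataFrame:
    """Return members whose predicted-risk score falls in the top `top_share`
    of the score distribution (e.g., the top 8% to 12% of the total population)."""
    cutoff = members[score_col].quantile(1.0 - top_share)
    return members[members[score_col] >= cutoff].copy()

# Example: members already scored by an upstream machine learning model
members = pd.DataFrame({
    "member_id": range(1, 11),
    "predicted_high_cost_score": [0.05, 0.12, 0.30, 0.44, 0.51,
                                  0.63, 0.71, 0.80, 0.88, 0.95],
})
measured = select_measured_population(members, top_share=0.10)
print(measured[["member_id", "predicted_high_cost_score"]])
```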

Outcomes

Standard outcomes assessed for shared savings or penalties in a value-based contract include (a) total cost of care, defined as billed medical and pharmacy claims submitted by a provider for care rendered to a patient insured by the payer in the value-based contract, (b) acute utilization (CMS place of service 21), (c) 30-day readmissions from an index admission for a similar admitting diagnosis, and (d) emergency department visits (generally denoted in billed claims by a revenue code in the range 0450-0459 or 0981).
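The sketch below shows, under assumed column names (`revenue_code`, `place_of_service`, `paid_amount`), how claim lines could be flagged against the emergency department revenue codes and acute place-of-service convention described above; actual claim layouts vary by payer.

```python
import pandas as pd

# ED visits: revenue codes 0450-0459 or 0981; acute utilization: place of service 21
ED_REVENUE_CODES = {f"{code:04d}" for code in range(450, 460)} | {"0981"}

def flag_outcome_claims(claims: pd.DataFrame) -> pd.DataFrame:
    """Flag claim lines as emergency department visits or acute (inpatient) utilization."""
    claims = claims.copy()
    claims["is_ed_visit"] = claims["revenue_code"].isin(ED_REVENUE_CODES)
    claims["is_acute"] = claims["place_of_service"] == "21"
    return claims

claims = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "revenue_code": ["0450", "0301", "0981", "0120"],
    "place_of_service": ["23", "11", "23", "21"],
    "paid_amount": [850.0, 120.0, 1400.0, 9600.0],
})
print(flag_outcome_claims(claims)[["member_id", "is_ed_visit", "is_acute"]])
```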

While typically not outcomes utilized for financial performance assessment, the following measures should be reported by the provider to support observed contractual outcomes1: average length of acute stay, average cost per acute admission, average evaluation and management visits per member, quality measures2, Risk Adjustment Factor scores, member retention with the payer, member satisfaction with the plan and provider, access to primary care and appropriate transportation, evidence of and functional capacity for self-management or assistance available to meet health needs, ability to coordinate care either alone or with assistance, and medication adherence. Lastly, in some value-based contracts, the ability of the provider to demonstrate revenue enhancement for the payer is also a desired outcome.

Outliers

The identification and methodological treatment of members with extreme right or left tail distributional profiles of the outcome is one of the most ambiguous aspects of a value-based contract. A common practice is to censor, or cap, annual total health care expenditures at the 99th percentile level.3 This rule, though, suffers from two principal limitations. First and foremost, a value-based program is designed to minimize the number of members exceeding such a threshold, and for those members who do exceed the threshold, reduce the average total health care cost relative to members not in the program. Accordingly, application of a 99th percentile rule mitigates the monetary value and shared savings a provider yields for a payer by artificially limiting the plan’s expenditure exposure.

Concomitant with a cap on the upper end of the distribution, outlier members may be defined as members with extreme trend values in health care expenditures over time. As with the annual threshold, censoring members with extreme cost increases (in particular, those not in the value-based program) mitigates the value created by a provider for the payer. While this may seem contradictory, providers should never allow right-tail distributional capping if they firmly believe their strategy will yield a downward shift in the expenditure distribution of program-attributed members. On the other end of the distribution, though, providers and payers should agree not to allow non-attributed members to realize a negative trend in expenditures over time greater in absolute value than the most negative trend observed in the attributed cohort. The rationale is that non-attributed members are not expected to realize greater cost savings over time (or, generally, improved outcomes) than an intensively managed attributed cohort. To enforce this, both parties should agree to censor the non-attributed negative trend at the most negative value observed in the attributed cohort.
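For concreteness, the sketch below expresses the two censoring rules discussed above in minimal form: a 99th percentile cap on annual cost (which, per the discussion, providers may prefer to reject) and a floor on the non-attributed cohort's trend at the most negative attributed trend. The function and column conventions are illustrative only.

```python
import pandas as pd

def cap_annual_cost(costs: pd.Series, percentile: float = 0.99) -> pd.Series:
    """Censor annual total cost at the given percentile (the common 99th percentile rule);
    as discussed above, this cap can understate provider-generated savings."""
    return costs.clip(upper=costs.quantile(percentile))

def floor_non_attributed_trend(attributed_trend: pd.Series,
                               non_attributed_trend: pd.Series) -> pd.Series:
    """Censor the non-attributed cohort's cost trend so that no comparison member shows a
    steeper cost decline than the most negative trend observed among attributed members."""
    return non_attributed_trend.clip(lower=attributed_trend.min())
```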

It is important to note the above discussion focused on a definition of outliers within the context of the outcome under measurement. Outlier members may also be those who have an attribute set shared by only a few other members in the measured population. Individuals with such a profile do not necessarily have “outlier costs,” but their complex medical and behavioral conditions are such that only a very small proportion of members have such factors.

Matching Variable Selection

As noted earlier, the key step in applying CEM to outcomes measurement is identification of the individual-level factors on which to match the treated and untreated cohorts. The purpose of the matching factors is two-fold. First, to control for observable factors between the two cohorts that explain as much of the variability in the outcome over time as feasible. Second, to allocate both cohorts uniquely to blocks, or strata, in which the mean of the outcome in the baseline is an accurate reflection of the true distribution of the outcome for both cohorts during this time, and any residual variability is normally distributed with mean zero and variance one. If the matching factors are correctly specified, the trend in the untreated cohort provides the “what if” result to directly compare to the observed trend in the treated cohort and thus enables unbiased estimation of the program effect.

The recommended approach to choosing matching variables is to rely on a mix of selection strategies. Principally, parties to a value-based contract should rely on (a) a hypothesis-based approach that in turn draws from the literature, analyst experience, and executive input; (b) statistical analysis of historical plan data focusing on factors explaining variability in baseline as well as trend estimates of the measured outcome; (c) empirical analysis of the trend observed in randomly drawn “treated” and “untreated” members from the untreated cohort; and (d) observation of the matching factor set that yields the lowest, if not zero, number of strata in which the trend in the “treated” cohort is positive yet the “untreated” cohort is negative (using the same randomly drawn members from (c)). As noted earlier, there are automated statistical packages for empirically selecting the set of matching variables to utilize within CEM4; my experience with the automated packages, though, is that the selection and binning algorithms are not suitable for most viable matching variables evaluated in a shared savings analysis.
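To make the CEM mechanics concrete, here is a minimal sketch (not the cited R cem package, which automates these steps) that coarsens each matching variable into analyst-chosen bins, defines strata by the joint bin signature, and keeps only strata containing both treated and untreated members. The variables, bin edges, and example data are hypothetical.

```python
import pandas as pd

def coarsened_exact_match(df: pd.DataFrame,
                          treatment_col: str,
                          bins: dict) -> pd.DataFrame:
    """Minimal CEM sketch: coarsen each matching variable into bins, form strata from the
    joint bin signature, and retain only strata with both treated and untreated members."""
    coarsened = df.copy()
    for col, edges in bins.items():
        coarsened[col + "_bin"] = pd.cut(coarsened[col], bins=edges, include_lowest=True)
    strata_cols = [col + "_bin" for col in bins]
    coarsened["stratum"] = coarsened[strata_cols].astype(str).agg("|".join, axis=1)
    # Keep strata in which both cohorts are represented
    mixed = coarsened.groupby("stratum")[treatment_col].transform(lambda s: s.nunique() == 2)
    return coarsened[mixed]

members = pd.DataFrame({
    "treated": [1, 1, 0, 0, 0, 1],
    "age": [67, 45, 66, 44, 70, 30],
    "baseline_cost": [42000, 9000, 40000, 8500, 41000, 2000],
})
matched = coarsened_exact_match(
    members, "treated",
    bins={"age": [0, 40, 65, 120], "baseline_cost": [0, 10000, 50000, 1_000_000]},
)
print(matched[["treated", "stratum"]])
```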

Parallel Trends, Baseline and Report Period

One of the most important assumptions in a difference-in-differences methodology concerns stability of the outcome trend between cohorts in the years prior to initiation of the value-based program (i.e., the baseline). In my experience, tests of the parallel trends assumption are difficult to implement due to the requirement for extensive historical data. Specifically, in the value-based programs I have evaluated, a member's baseline is a function of their attribution or enrollment status. For members not attributed or enrolled in the program, the baseline is the most recent twelve months from the first identification date on which the member became eligible for the program. Attributed or enrolled members, though, have as their baseline the most recent twelve months from the attribution or enrollment date. In many instances, the baseline period will be more than a year before the current reporting or measurement period, which in turn means the baseline start date is twelve additional months back in time; adding a further twelve or more months in order to test the parallel trends assumption requires extensive historical plan data on the membership, and that data must exist for both cohorts of members.
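The baseline-window logic described above can be expressed in a few lines. The sketch below assumes a single anchor date per member (the attribution/enrollment date for attributed members, or the first program-eligibility date otherwise) and returns the preceding twelve-month window; the date-handling conventions are illustrative.

```python
import pandas as pd

def baseline_window(anchor_date: pd.Timestamp) -> tuple[pd.Timestamp, pd.Timestamp]:
    """Return the twelve-month baseline window ending the day before the anchor date.
    The anchor is the attribution/enrollment date for attributed members and the
    first program-eligibility date for non-attributed members."""
    end = anchor_date - pd.Timedelta(days=1)
    start = anchor_date - pd.DateOffset(months=12)
    return start, end

# Example: an attributed member enrolled on 2023-04-01
start, end = baseline_window(pd.Timestamp("2023-04-01"))
print(start.date(), "through", end.date())   # 2022-04-01 through 2023-03-31
```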

The report or measurement period is the window of time during which the value of the program is quantitatively assessed for determining success or failure, and accordingly, whether value will be monetized in the form of shared savings or penalties. The report period is typically defined by a twelve-month calendar year. Interim measurements, such as quarterly, should be conducted to ensure both parties to the value-based contract are aligned on the outcomes reconciliation methodology, have similar if not the same data, and thus derive similar outcome results.
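A minimal difference-in-differences reconciliation, of the kind an interim (e.g., quarterly) measurement could repeat, might look like the sketch below; the data layout and cost figures are illustrative only.

```python
import pandas as pd

def difference_in_differences(df: pd.DataFrame) -> float:
    """Simple difference-in-differences estimate of per-member savings:
    (treated report - treated baseline) - (untreated report - untreated baseline).
    A negative value indicates estimated cost savings attributable to the program."""
    means = df.groupby(["treated", "period"])["annual_cost"].mean()
    treated_change = means[(1, "report")] - means[(1, "baseline")]
    untreated_change = means[(0, "report")] - means[(0, "baseline")]
    return treated_change - untreated_change

outcomes = pd.DataFrame({
    "treated":     [1, 1, 0, 0, 1, 1, 0, 0],
    "period":      ["baseline", "baseline", "baseline", "baseline",
                    "report", "report", "report", "report"],
    "annual_cost": [40000, 38000, 39000, 41000, 36000, 35000, 42000, 43000],
})
print(difference_in_differences(outcomes))  # negative => estimated savings
```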

Provider Insight to Patient Total Cost of Care

In many contractual settings, providers are disadvantaged by a lack of insight into utilization occurring outside of their practice. A provider's inability to account for, and adjust patient-specific care plans based on, care their patients receive elsewhere is compounded by unknown incurred paid claims for which the provider is financially responsible within a value-based contract. Providers, then, are burdened with asking uncomfortable questions of their patients concerning care received outside their practice, especially care not authorized or recommended by the provider. To mitigate adverse consequences to patient continuity of care, the patient-provider relationship, and the financial obligations a provider must satisfy with a contracted payer, providers must contract for monthly data feeds from the payer covering their patient panel, with elements such as adjudicated medical and pharmacy claims, demographics, and benefits eligibility.

Plan Design

One of the most important attributes to control for within an analysis monetizing shared savings and penalties in a value-based contract is the benefits framework of the insurance plan selected by an individual. Specifically, both the provider and the payer should ensure data are made available on a frequent basis regarding the monthly premium, deductible, coinsurance rate, and out-of-network fees associated with selected plans. Given that value-based contracts are typically based on paid claims, as opposed to allowed claim amounts, the plan attributes listed above directly influence the exposure of both providers and payers to an individual's health care utilization. As a general example, low-deductible plans increase provider and payer exposure to paid claims because individuals with these plans reach the point of paying at a coinsurance rate, rather than dollar for dollar, sooner in the year; this is especially true for individuals with elevated health care needs due to multiple chronic conditions, which is a key cohort of a payer's population to be contracted for within a value-based setting. Payers, though, are protected from this increased exposure by higher premiums; providers have no such protection.
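The deductible and coinsurance mechanics described above can be illustrated with a simplified split of an allowed claim amount into member-paid and plan-paid portions; copays and out-of-pocket maximums are ignored, and the figures are hypothetical.

```python
def split_allowed_amount(allowed: float,
                         deductible_remaining: float,
                         coinsurance_rate: float = 0.2) -> tuple[float, float, float]:
    """Simplified split of an allowed claim amount. Dollars up to the remaining deductible
    are member responsibility; the remainder is shared at the coinsurance rate.
    Returns (member_paid, plan_paid, deductible_remaining_after_claim)."""
    toward_deductible = min(allowed, deductible_remaining)
    remainder = allowed - toward_deductible
    member_paid = toward_deductible + remainder * coinsurance_rate
    plan_paid = remainder * (1 - coinsurance_rate)
    return member_paid, plan_paid, deductible_remaining - toward_deductible

# Low-deductible plan: the plan begins paying sooner in the year
print(split_allowed_amount(allowed=2000.0, deductible_remaining=500.0))   # (800.0, 1200.0, 0.0)
# High-deductible plan: the same claim is entirely member-paid
print(split_allowed_amount(allowed=2000.0, deductible_remaining=3000.0))  # (2000.0, 0.0, 1000.0)
```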

Competing or Complementary Health Improvement Programs

Similar to the aforementioned imperative that providers contract with payers for monthly data feeds on elements related to their panel's total cost of healthcare, it is incumbent upon providers to have objective insight into the competing or complementary health improvement programs their patients may be participating in via offerings from the payer. Such programs include case management, chronic condition management, in-home primary care, home health, and well-being improvement. With access to such information, providers can update a patient's care plan to account for the supplemental activity and expected accrued benefits. Beyond enhancing the patient-provider relationship, which in turn increases the likelihood of maximizing value, data related to patient participation in health improvement programs should be a core component of the features controlled for within an analysis monetizing shared savings and penalties. In my experience reconciling financial outcomes in value-based contracts, a direct relationship exists between the number of health improvement programs an individual is engaged in and the magnitude of shared savings. In other words, by not accounting for such programs in the monetization analysis, provider shared savings are downwardly biased.

Provider Reimbursement Contracts

A final primary best practice concerns the specific contracting rates payers negotiate with providers for billable services. In line with the Plan Design section, providers in a value-based contract should be aware of how their rate compares to other contracted providers for similar services, as the fee structure impacts the speed with which the deductible is met (after which the individual pays at a coinsurance rate). It is important to remember that non-capitated value-based contracts are predicated on a fee-for-service model; accordingly, a provider's fee schedule directly impacts total cost of care and thus the ability to meet financially based value metrics. In a capitated value-based contract with no upside related to total cost of care, this consideration is not important; however, such contracts are rare.

Supplemental Considerations

In addition to the above methodological factors to consider when monetizing outcomes within a value-based contract, the following factors should be evaluated in terms of outcome sensitivity based on their inclusion, exclusion, or varying input levels: age restrictions, current and historical eligibility, within-year cost minimums (e.g., is $0 a reasonable annual spend level), runout period, claim adjustments, minimum plan and program exposure, pandemic events and other significant public health realities, and state and federal health policy changes. Lastly, an evaluation of the sample size needed to yield robust estimates of measured outcomes should be conducted, especially in value-based contracts involving specific sub-cohorts of members, reliance on a wide array of member attributes, or outcomes with high variability.
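One way to operationalize this sensitivity testing is to re-run the savings estimate over a grid of analytic choices, as in the sketch below. Here, `estimate_savings` is a hypothetical stand-in for whatever reconciliation methodology the contract specifies, and the parameter values are illustrative only.

```python
from itertools import product

# Hypothetical stub standing in for the contract's reconciliation methodology
def estimate_savings(claims, runout, cost_min, max_age):
    return 1_000_000 - 50_000 * runout - 10 * cost_min  # illustrative figures only

def sensitivity_grid(claims, runout_months=(3, 6), cost_minimums=(0, 100),
                     max_ages=(None, 89)):
    """Re-estimate shared savings under varying analytic inputs (runout period, minimum
    annual spend, age restriction) to show how sensitive the result is to each choice."""
    results = []
    for runout, cost_min, max_age in product(runout_months, cost_minimums, max_ages):
        results.append({
            "runout_months": runout,
            "cost_minimum": cost_min,
            "max_age": max_age,
            "estimated_savings": estimate_savings(claims, runout, cost_min, max_age),
        })
    return results

for row in sensitivity_grid(claims=None):
    print(row)
```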

Conclusion

With a defined set of best practices guiding the contracting process for entities entering into a value-based agreement, standardized, robust, high-fidelity, and replicable estimates of the monetary value derived from value-enhancing health care practices are achievable. Moreover, with a defined set of best practices, the contracting entities will be able to enter into value-based contracts with confidence that the agreed-upon capitation rate, program fee, and gain-share or penalty assessment levels are based on the best information and methods available. By leveraging the best practices referenced here with the validated Coarsened Exact Matching methodology, providers, payers, and patients will be assured the common aim initiated by CMS for value-based healthcare will be realized: to reform how health care is delivered and paid for in order to achieve better care for individuals, better health for populations, and lower cost.5

Aaron R. Wells, PhD, is a Principal at InfoWorks, a Nashville, Tennessee-based consulting firm providing services in the areas of management, data science, technology, and custom application development. Aaron has extensive expertise in value-based healthcare outcomes monetization, machine learning, and quasi-experimental outcomes research. He has helped firms transition to value-based care, from the early days of the CMS pilots and the Pioneer ACO model to today's Alternative Payment Models.


References

1. The listed outcome measures are generally referenced as “plausibility metrics” because observed values for these metrics give the payer a gauge of the reasonability, or plausibility, that value (which is measured by the absence of data) was created.

2. See, for example: https://innovation.cms.gov/files/x/nextgenaco-benchmarkmethodology-py4.pdf

3. See, for example: https://innovation.cms.gov/files/x/nextgenaco-benchmarkmethodology-py4.pdf

4. See https://scholar.harvard.edu/files/gking/files/cem.pdf, https://rdrr.io/cran/cem/man/L1.profile.html, and https://www.rdocumentation.org/packages/cem/versions/1.1.20/topics/imbspace.

5. See https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/Value-Based-Programs.
