Commentary | October 23, 2025

New CMS RADV audits: The good, the bad and the ugly

Strategies for organizations to navigate CMS's decision to fast-track risk adjustment audits.

The U.S. Centers for Medicare & Medicaid Services (CMS) has announced it will expedite risk adjustment data validation (RADV) audits on all Medicare Advantage (MA) contracts each year. This year will be an especially interesting one as the agency works through a seven-year backlog (2018 to 2024) to bring the program up to date. Notably, CMS will also be extrapolating findings from the expanded (but still relatively small) sample it will draw for each contract.

These changes mark a turning point for MA plans: The old model of retrospective risk management is no longer sustainable. Proactive coding practices using smarter tools, including AI solutions, are essential for navigating this new reality.

As with most CMS initiatives, the implications are a mix of the good, the bad and a lot of unknown (the ugly). Let’s start with …

The good

Bringing RADV audits up to date eliminates the prolonged uncertainty that health plans have long faced.

Previously, plans didn’t know when, or even if, they would be audited, which burdened even the most compliant plans with financial and administrative uncertainty for years after a plan year closed. Now we can expect a regular pattern of audits conducted in the year after each plan year.

As an actuary, I can say that having a consistent, known schedule is good for financial forecasting even if the results are imperfect. This allows executives, investors and researchers to get a timely and accurate picture of plan performance.

The bad

The bad is fairly obvious: With every contract being audited and findings extrapolated, the financial exposure will be far greater than in the past. Whereas previously an MA organization could expect only a quarter or a third of its contracts to be reviewed in any given RADV cycle, now every contract will be. The administrative burden will increase accordingly.

The pressure will be especially acute in 2025 and 2026, when seven years of data responses must be submitted to CMS within a very tight window. Whether CMS can mobilize the resources required to meet its stated goal of completing these audits by early 2026 remains to be seen.

The ugly

That brings us to the ugly: the unknowns. Can CMS realistically collect and process all the data by the stated deadlines? What happens if it can’t? Even more interesting, some audits will examine dates of service as far back as 2017, including care delivered during the COVID-19 pandemic.

This presents significant practical challenges. Most organizations have since changed leadership teams, processes and related protocols, making consistent and fair assessment complex. Although it is not technically part of the process, will CMS look favorably on organizations that have since moved to more compliant practices, or will it examine each year independently? Officially, the answer is the latter, but this raises the question of what level of detail CMS can realistically review given the sheer volume of data and the limited time at its disposal.

This brings me to the biggest unknown: extrapolation. CMS has stated very clearly that it will extrapolate, as permitted under RADV rules. This means it will audit a small, representative sample of records, then project any identified errors across the entire population of claims within that contract. For example, if CMS finds a 10% error rate in the sampled records, it will assume a similar error rate across all of that contract’s claims for the year, significantly increasing potential financial recoveries for CMS and potential financial exposure for plans.
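
To put the arithmetic in concrete terms, here is a minimal sketch in Python. Every figure in it is a hypothetical assumption chosen for illustration; it is not CMS's actual sampling or payment methodology.

```python
# Illustrative sketch only: all figures are made-up assumptions,
# not CMS's published RADV sampling or payment methodology.

def extrapolated_recovery(sample_overpayment: float,
                          sample_payment: float,
                          total_contract_payment: float) -> float:
    """Project the error rate observed in the audit sample
    onto all risk-adjusted payments under the contract."""
    error_rate = sample_overpayment / sample_payment
    return error_rate * total_contract_payment

# Hypothetical contract: $2.0M paid for the sampled members, $200K of that
# found to be unsupported (a 10% error rate), $500M paid under the contract.
recovery = extrapolated_recovery(sample_overpayment=200_000,
                                 sample_payment=2_000_000,
                                 total_contract_payment=500_000_000)
print(f"Projected recovery: ${recovery:,.0f}")
# -> Projected recovery: $50,000,000
#    (versus repaying only the $200,000 found in the sample)
```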

However, this practice raises several challenges. Take normalization factors, for instance. Normalization is an annual adjustment CMS makes to ensure risk scores average to approximately 1.00 across all Medicare Advantage plans. It accounts for widespread coding trends, ensuring that no plan unfairly benefits or suffers financially from industry-wide coding shifts.

Each year’s normalization factor is set based on the prior year’s coding and a projection of how it will change in the coming year. One could argue that all coding practices have already been factored in and normalized out of the program’s overall score. Setting aside plans with egregiously noncompliant coding, how does CMS square having effectively taken money from one plan and given it to another, only to then take it all back?

As an example, through normalization the overall risk score should average 1.00 (theoretically, at least; in MA this doesn’t happen concurrently, because the factor is set in advance of the year). This leads to two outcomes:

  • A more compliant plan has already been “penalized” by being normalized against noncompliant plans. Will it be renormalized and given revenue back? Of course not.
  • A less compliant (but not egregiously noncompliant) plan has already had its score reduced by normalization. Will the extrapolated recovery be “un-normalized” to ensure “proper” payment? How would that be done?
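
To put rough numbers on those two outcomes, here is a minimal sketch. The risk scores are invented, and the factor is simplified to the average of the raw scores; actual normalization factors are projected by CMS from prior-year trends. The point is only to show the direction of the money flows.

```python
# Illustrative sketch only: the risk scores and the simplified "average of raw
# scores" factor are assumptions, not CMS data or its actual projection method.

raw_scores = {"compliant_plan": 1.05, "aggressive_plan": 1.25}

# One program-wide factor is applied to every plan so the average lands near 1.00.
normalization_factor = sum(raw_scores.values()) / len(raw_scores)

normalized = {plan: score / normalization_factor for plan, score in raw_scores.items()}
print(round(normalization_factor, 3),
      {plan: round(score, 3) for plan, score in normalized.items()})
# -> 1.15 {'compliant_plan': 0.913, 'aggressive_plan': 1.087}
#
# The compliant plan's payment was already pulled below its raw score by an
# industry-wide factor driven partly by the other plan's coding. If RADV later
# claws back the aggressive plan's unsupported coding at full, extrapolated
# value, nothing re-runs normalization or returns revenue to the compliant
# plan -- the asymmetry described in the bullets above.
```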

To be clear, I am not suggesting that CMS should ignore noncompliant plans and not extrapolate. But extrapolation has historically sparked litigation. Given the stakes, plans will almost certainly challenge CMS findings in court, leading to lengthy disputes and potential negotiated settlements that may significantly alter final financial outcomes. In the end, I suspect that, as with past attempts to extrapolate under prior RADV and Office of Inspector General audits, the negotiated settlements will fall well short of what full extrapolation would net.

In the long term, there is a positive benefit: restoring RADV audits to their intended purpose, which is not only recouping overpayments but also driving compliance and corrective action. That way, plans are paid on an appropriate and level playing field. Conducting RADV audits years after the fact is purely punitive; it doesn’t offer the critical, timely feedback that keeps plans compliant. Remember, the D and V in RADV stand for data validation. RADV’s original intent was to ensure that payments for the most recent year can be validated, not to allow punishment years later.

It is easy to get caught up in the financial impact on plans and forget that overpayments to one plan mean less money for others. CMS’s goal is to ensure that each plan receives its fair and appropriate slice of the predetermined MA budget.

The key takeaway for providers and health plans is to get proactive. That means accurate, prospective coding and documentation, ideally supported by AI technology that uncovers gaps early, guides documentation and enables course correction throughout the year. Those who fail to modernize in the face of this more aggressive RADV landscape will face increasing risk and diminishing returns.

Jonathan Meyers, FSA, MAAA, is the CEO at Seldon Health Advisors and serves on the advisory board for Navina.
