
The Ambulatory Specialty Model Explained

What CMS's Mandatory Test Means for Specialists

By VBCA • Published January 27, 2026 • 16 min read

A comprehensive analysis of ASM's structure, scoring, and strategic implications—and why the preparation window matters.

Introduction

The 2026 Physician Fee Schedule Final Rule established the Ambulatory Specialty Model—a mandatory, five-year test of specialist accountability within traditional Medicare. The model launches January 1, 2027, and runs through December 31, 2031, followed by two payment reconciliation years.

ASM represents the first large-scale effort by CMS to hold specialists in ambulatory settings financially accountable for upstream management of chronic conditions that drive high spending and avoidable hospitalizations. It focuses initially on heart failure and low back pain—two conditions that together account for roughly 6% of Medicare Parts A and B spending and are predominantly managed by specialists.

The model builds on the MIPS Value Pathway framework but differs in key ways: mandatory participation in selected areas, peer-relative benchmarking within specialty, and payment adjustments reaching ±12% of Part B revenue. This isn't MIPS with higher stakes. It's a structural test of whether specialist accountability can bend the cost curve.


Why ASM Exists

What CMS Is Trying to Accomplish

CMS positions ASM as a key pillar of the Innovation Center's 2025 Strategy, which emphasizes prevention, transparency, and competition to strengthen Medicare's fiscal sustainability. The model has four explicit objectives:

1. Engage specialists in value-based care. Historically, ACO and bundled-payment models concentrated on hospitals and primary care. Specialists—who manage most chronic disease spending—have largely operated outside accountability structures. ASM changes that.
2. Increase two-sided risk. MIPS created modest incentives. ASM creates meaningful ones. Clinicians will be rewarded for improving quality, lowering costs, and coordinating with primary care—and penalized significantly if they don't.
3. Simplify measurement through condition-specific pathways. Rather than the measure-selection complexity of traditional MIPS, ASM uses fixed measure sets aligned to specific conditions. This enables peer comparison and reduces gaming.
4. Generate generalizable evidence. By making the model mandatory in randomly selected areas, CMS can evaluate specialist accountability without selection bias. Voluntary models suffer from participants who self-select because they expect to perform well. Mandatory participation produces cleaner evidence.

The Policy Context

ASM doesn't exist in isolation. It's part of a broader CMS strategy for specialist accountability that includes models such as LEAD, the ACO REACH successor launching in 2027.

The through-line: CMS is building infrastructure for specialist accountability across multiple models. Organizations that develop capabilities for ASM will find those same capabilities valuable for LEAD arrangements and future models.

The implication: If ASM proves effective, it could expand to diabetes, COPD, chronic kidney disease, or oncology—conditions that also drive high Medicare spending. The infrastructure you build now may serve you across multiple future models.


Who Must Participate

Geographic Selection

ASM participation is determined geographically. CMS randomly selected approximately 25% of the 935 Core-Based Statistical Areas (CBSAs) and Metropolitan Divisions nationwide—235 areas total.

The selection wasn't purely random. CMS stratified areas into six cohorts based on:

  • Average Medicare Parts A and B spending
  • Episode volume for heart failure and low back pain
  • Metropolitan status

Within each cohort, roughly 40% of areas were selected. This ensures the mandatory population represents diverse cost environments, not just high-spending markets.

The mandatory areas span all 50 states. Texas has the most (16 areas), followed by California (12), Florida (12), and New York (11). But even smaller states have mandatory CBSAs—this isn't concentrated in a few regions.

CMS will release the preliminary participant list in early 2026 and finalize it around July 2026. If your practice is in a selected area and you meet the eligibility criteria, participation is not optional.


Specialty and Condition Focus

ASM targets two condition cohorts:

Heart Failure Cohort:

  • Eligible specialty: Cardiology
  • Condition focus: Heart failure management
  • Episode trigger: Two Part B claims billed by the same participant within 180 days
  • Attribution threshold: ≥30% of Part B services within the episode

Low Back Pain Cohort:

  • Eligible specialties: Orthopedic surgery, neurosurgery, pain management, anesthesiology, physical medicine & rehabilitation
  • Condition focus: Low back pain management
  • Episode trigger: ICD-10 diagnosis code for low back pain plus a confirming code within 60 days from the same participant
  • Attribution threshold: ≥30% of Part B services within the episode

CMS determines your specialty based on the code used most frequently on your Medicare Part B claims. If you bill under multiple specialties, the plurality specialty governs.
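To make the trigger logic concrete, here is a minimal sketch of how a practice might pre-screen its own Part B claims for likely heart failure episode triggers (two claims from the same participant within 180 days). The claim fields and the screening function are illustrative assumptions, not CMS's episode construction algorithm.

```python
from datetime import date, timedelta

# Illustrative claim record: (beneficiary_id, billing_npi, service_date).
# These field names are assumptions for this sketch, not CMS data elements.
Claim = tuple[str, str, date]

def likely_hf_triggers(claims: list[Claim], window_days: int = 180) -> set[tuple[str, str]]:
    """Flag (beneficiary, clinician) pairs with two Part B claims from the
    same participant inside the trigger window."""
    by_pair: dict[tuple[str, str], list[date]] = {}
    for bene, npi, svc_date in claims:
        by_pair.setdefault((bene, npi), []).append(svc_date)

    triggered = set()
    for pair, dates in by_pair.items():
        dates.sort()
        # If any two claims fall within the window, some adjacent pair does too.
        for first, second in zip(dates, dates[1:]):
            if second - first <= timedelta(days=window_days):
                triggered.add(pair)
                break
    return triggered

# Two cardiology claims about three months apart would flag this pair.
claims = [("bene-1", "npi-9", date(2027, 2, 1)), ("bene-1", "npi-9", date(2027, 5, 1))]
print(likely_hf_triggers(claims))  # {('bene-1', 'npi-9')}
```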

Volume Thresholds

To face payment adjustments, you must treat at least 20 attributed episodes in the relevant condition during the performance year.

Attribution matters: Episodes only count toward your volume if you bill at least 30% of the Part B services within that episode. Clinicians who see a patient once or provide limited services won't reach this threshold.

What if you don't meet the threshold? You remain an ASM participant (tracked for evaluation purposes) but don't receive payment adjustments for that year. Instead, you must report through MIPS. This can happen year to year—you might qualify for ASM adjustments in 2027, fall below threshold in 2028, and return to ASM in 2029.

The implication for low-volume specialists: Because episode counts can be small—especially in lower-volume practices—ASM scores may swing sharply based on relatively few patients. A handful of high-cost episodes can significantly affect your position.
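As a rough illustration of how the attribution and volume rules interact, the sketch below counts only episodes where the clinician billed at least 30% of the episode's Part B services and then checks the 20-episode floor. The episode records and the use of billed dollars as the share measure are simplifying assumptions.

```python
# Hypothetical episode summaries: Part B billed by this clinician inside the
# episode vs. total Part B billed in the episode (dollars used as a proxy).
episodes = [
    {"clinician_part_b": 4200.0, "episode_part_b": 9800.0},
    {"clinician_part_b": 300.0, "episode_part_b": 7500.0},   # under 30%: not attributed
    # ... remaining episodes from the performance year
]

ATTRIBUTION_SHARE = 0.30   # must bill >= 30% of Part B services in the episode
VOLUME_FLOOR = 20          # minimum attributed episodes to face payment adjustments

attributed = [
    ep for ep in episodes
    if ep["clinician_part_b"] / ep["episode_part_b"] >= ATTRIBUTION_SHARE
]

if len(attributed) >= VOLUME_FLOOR:
    print(f"{len(attributed)} attributed episodes: subject to ASM payment adjustments")
else:
    print(f"{len(attributed)} attributed episodes: below the floor, report through MIPS this year")
```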

Notable Exceptions

  • RHCs and FQHCs: Clinicians whose services are billed exclusively under RHC/FQHC methods are not eligible
  • Critical Access Hospitals: Clinicians in CAHs paid under Method I are included if they otherwise meet criteria
  • Multiple TINs: Clinicians associated with multiple TINs who meet criteria under each are treated as separate ASM participants for each TIN/NPI combination
  • Specialty code changes: If your specialty code changes mid-year, CMS uses the code appearing most frequently on your Part B claims

How Scoring Works

The Composite Score

Each ASM participant earns a final composite score from 0 to 100. This score determines payment adjustments two years later. The score combines four categories:

  • Quality (50%): Condition-specific outcome and process measures
  • Cost (50%): Episode-based cost relative to peers
  • Improvement Activities (0 to −20 points): Primary care coordination requirements
  • Promoting Interoperability (0 to −10 points): Health IT and data exchange requirements

Why this weighting? CMS expects most clinicians to perform well on Improvement Activities and Promoting Interoperability based on historical MIPS data. If these categories carried large positive weights, scores would cluster—making it hard to differentiate performance. By weighting Quality and Cost at 50% each (where variation is greater) and making IA and PI penalty-only, CMS ensures the scoring produces meaningful distribution.

The implication: Equal weighting of quality and cost means even modest improvements in care or reductions in unnecessary utilization can meaningfully shift payment. This isn't a compliance exercise—it's a performance exercise.
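As a minimal sketch of how these categories could roll up, the snippet below assumes quality and cost each contribute a 0–50 sub-score and that Improvement Activities and Promoting Interoperability only subtract points; CMS's exact aggregation formula may differ in its details.

```python
def composite_score(quality_sub: float, cost_sub: float,
                    ia_penalty: float, pi_penalty: float) -> float:
    """Sketch of the 0-100 ASM composite: two 0-50 sub-scores plus penalty
    points (which are zero or negative), clamped to the 0-100 range."""
    assert 0 <= quality_sub <= 50 and 0 <= cost_sub <= 50
    assert -20 <= ia_penalty <= 0 and -10 <= pi_penalty <= 0
    return max(0.0, min(100.0, quality_sub + cost_sub + ia_penalty + pi_penalty))

# Strong quality (42/50), middling cost (30/50), full IA compliance,
# but a missed Promoting Interoperability requirement (-10) lands at 62.
print(composite_score(42, 30, 0, -10))  # 62
```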

Quality Scoring (50%)

Quality accounts for half your score. Each measure yields 1–10 achievement points based on your performance relative to other ASM participants treating the same condition.

The scoring is decile-based: if you're in the top 10% for a measure, you receive 10 points; the bottom 10% receives 1 point. Points are averaged across measures and scaled to produce a quality sub-score of 0–50.
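Here is a small sketch of that decile logic, assuming higher measure rates are better and that the averaged points are rescaled to the 0–50 sub-score; the measure IDs are placeholders and the real benchmark construction follows CMS's specifications.

```python
import math

def decile_points(my_rate: float, peer_rates: list[float]) -> int:
    """1-10 achievement points from standing among ASM peers on one measure.
    Assumes higher rates are better; inverse measures would be flipped."""
    share_at_or_below = sum(r <= my_rate for r in peer_rates) / len(peer_rates)
    return min(10, max(1, math.ceil(share_at_or_below * 10)))

def quality_subscore(my_rates: dict[str, float],
                     peer_rates: dict[str, list[float]]) -> float:
    """Average the per-measure points, then rescale to 0-50 (an assumption
    about how the sub-score is produced)."""
    points = [decile_points(my_rates[m], peer_rates[m]) for m in my_rates]
    return sum(points) / len(points) * 5

# Placeholder peer distributions: top decile on Q008, middle of the pack on Q236.
peers = {"Q008": [0.5, 0.6, 0.7, 0.8, 0.9], "Q236": [0.4, 0.5, 0.6, 0.7, 0.8]}
print(quality_subscore({"Q008": 0.95, "Q236": 0.6}, peers))  # 40.0
```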

A critical difference from MIPS: Benchmarks are recalculated annually from actual ASM participant performance—not historical MIPS data. This creates a fundamental shift in how you experience the model.

MIPS benchmarking vs. ASM benchmarking:

  • MIPS: based on historical data; benchmarks are set from prior years' performance. ASM: based on current-year participants; benchmarks are calculated after everyone submits.
  • MIPS: you know the target and can plan strategy around known thresholds. ASM: the target is unknowable; you can't predict where the deciles will land.
  • MIPS: a fixed standard (your 85% performance counts as 85%, regardless of peers). ASM: a moving target (your 85% might land in the 9th decile... or the 4th).

You won't know your outcome until everyone reports. You submit data throughout 2027, CMS calculates benchmarks from the participant pool, and in late 2028 you learn where you landed—and whether you're facing a payment increase or cut in 2029. There's no way to calculate your outcome in advance because the benchmark doesn't exist until after submission closes.

Data completeness requirement: You must report at least 75% of eligible cases for each measure. Any measure that doesn't meet this threshold automatically receives 0 achievement points, regardless of actual performance. Not meeting data submission requirements for the quality category triggers the maximum negative payment adjustment.

Cost Scoring (50%)

Cost also accounts for 50% of your score. CMS uses Episode-Based Cost Measures (EBCMs) specific to each condition cohort.

Scoring is anchored to the median episode cost among participants:

  • Score of 6 = median cost
  • Scores above 6 = below-median cost (better)
  • Scores below 6 = above-median cost (worse)

Cost Scoring Benchmark Methodology:

  • Score 10: median cost − 1.5 standard deviations
  • Score 9–9.9: median cost − 1.25 SD
  • Score 8–8.9: median cost − 1.0 SD
  • Score 7–7.9: median cost − 0.5 SD
  • Score 6–6.9: median cost (0 SD)
  • Score 5–5.9: median cost + 0.5 SD
  • Score 4–4.9: median cost + 1.0 SD
  • Score 3–3.9: median cost + 1.5 SD
  • Score 2–2.9: median cost + 2.0 SD
  • Score 1–1.9: median cost + 2.5 SD

The benchmark ranges are calibrated so that clinicians with costs near the median receive scores close to the middle of the scale. Higher or lower scores are determined by linear interpolation within each range.
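One plausible reading of that methodology is the interpolation sketch below, which anchors scores at the published SD offsets and interpolates linearly between them. The exact band boundaries and edge handling are assumptions, not CMS's published algorithm.

```python
def cost_score(episode_cost: float, median: float, sd: float) -> float:
    """Sketch of the 1-10 cost score: anchors taken from the benchmark table
    (SD offsets from the median cost), with linear interpolation between them."""
    # (SD offset from median, score) anchors; lower cost earns a higher score.
    anchors = [(-1.5, 10), (-1.25, 9), (-1.0, 8), (-0.5, 7), (0.0, 6),
               (0.5, 5), (1.0, 4), (1.5, 3), (2.0, 2), (2.5, 1)]
    offset = (episode_cost - median) / sd
    if offset <= anchors[0][0]:
        return 10.0
    if offset >= anchors[-1][0]:
        return 1.0
    for (lo_off, lo_score), (hi_off, hi_score) in zip(anchors, anchors[1:]):
        if lo_off <= offset <= hi_off:
            frac = (offset - lo_off) / (hi_off - lo_off)
            return lo_score + frac * (hi_score - lo_score)
    return 1.0  # unreachable with the anchors above

# Risk-adjusted episode costs 0.25 SD below the median land halfway between
# the 6 and 7 anchors.
print(cost_score(episode_cost=9_500, median=10_000, sd=2_000))  # 6.5
```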

Risk adjustment: Episode costs are adjusted for patient health status (HCC risk scores) and the participant's specialty role in the episode. You're compared to peers managing similar patient complexity, not raw dollar amounts.

The implication: Understanding your cost position requires seeing episode-level data, not just aggregate spending. The patients appearing in your high-cost episodes are signaling specific clinical patterns—unmanaged comorbidities, care coordination gaps, utilization choices. Addressing costs means understanding those patterns.


Quality Measures in Detail

ASM uses fixed measure sets for each cohort. Unlike MVPs, you don't select which measures to report—all measures are required.

Heart Failure Quality Measures

  • Risk-Standardized Acute Unplanned Cardiovascular-Related Admission Rates (Q492). Collection: claims. Domain: excess utilization. Asks: are your heart failure patients being hospitalized for preventable cardiovascular events?
  • HF: Beta-blocker therapy for LV systolic dysfunction (Q008). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: are patients with HFrEF receiving beta-blockers?
  • HF: ACE Inhibitor/ARB/ARNI therapy for LV systolic dysfunction (Q005). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: are patients receiving guideline-directed medical therapy?
  • Controlling High Blood Pressure (Q236). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: is blood pressure controlled in your heart failure population?
  • Functional Status Assessments for Heart Failure (Q377). Collection: eCQM. Domain: patient-reported outcomes. Asks: are you collecting patient-reported functional status?

Note on Q377: CMS included the process version (did you assess functional status?) rather than the performance version (did functional status improve?). CMS stated it continues to explore whether a patient-reported performance measure would be appropriate. Expect this to evolve.

VBCA has developed CMS-approved QCDR measures relevant to heart failure management that complement these ASM requirements.

Low Back Pain Quality Measures

  • Use of High-Risk Medications in Older Adults (Q238). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: are older patients avoiding medications with high adverse event risk?
  • Screening for Depression and Follow-Up Plan (Q134). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: are chronic pain patients being screened for depression?
  • Body Mass Index (BMI) Screening and Follow-Up Plan (Q128). Collection: eCQM/MIPS CQM. Domain: evidence-based care. Asks: are patients screened for obesity, which exacerbates low back pain?
  • Functional Status Change for Patients with Low Back Impairments (Q220). Collection: MIPS CQM. Domain: patient-reported outcomes. Asks: is functional status improving over time?
  • Fifth measure: to be determined in CY 2027 rulemaking. CMS wants an imaging utilization measure but faced development concerns.

Note on imaging: CMS did not finalize the proposed MRI Lumbar Spine measure for low back pain, though it stated that assessing inappropriate MRI use remains a priority. The measure was still in development. Expect CMS to revisit this.

The implication: The low back pain measure set focuses heavily on screening behaviors that may feel tangential to procedural practice. If you're a spine surgeon or interventional pain specialist, these measures are asking whether you're managing the whole patient—not just performing procedures. That's intentional.


The Operational Requirements Most Practices Aren't Ready For

Improvement Activities: The Hidden Complexity

Rather than awarding bonus points, CMS applies penalties up to −20 points for failure to complete required activities. This is where most specialists will face the steepest learning curve.

The Improvement Activities category has two components, both measured at the TIN level.

Component 1: Connecting to Primary Care and Social Needs

ASM participants must have processes, workflows, or technology that ensure:

Primary care connection:
  • Confirm each ASM beneficiary has access to a primary care provider
  • If the patient doesn't have a PCP, help them find one
  • This isn't documentation—it's an active requirement to connect patients to primary care
Communication back to primary care:
  • Send relevant clinical information to the beneficiary's PCP after each ASM visit
  • This means referral notes, test results, treatment changes, follow-up plans
  • Requires either direct EHR integration, fax workflows, or a health information exchange connection
Health-related social needs (HRSN) screening:
  • Determine whether each ASM beneficiary has received an annual HRSN screening in primary care
  • If not, either encourage the PCP to conduct it or perform the screening yourself
  • HRSN screening covers food insecurity, housing instability, transportation, utilities, and interpersonal safety

The implication: If you don't currently ask your patients whether they have a primary care doctor—or what happens to them between your visits—this requirement will feel foreign. CMS is explicitly testing whether specialists can function as part of a care team, not just as episodic consultants.

Component 2: Collaborative Care Arrangements with Primary Care

ASM participants must establish at least one formal collaborative care arrangement with a primary care practice that shares ASM beneficiaries.

The arrangement must include at least 3 of these 5 elements:

  • Data sharing: Bi-directional exchange of patient information, including test results, treatment plans, and follow-up needs. Not just sending notes; receiving information back.
  • Co-management protocols: Defined approaches for shared management of complex or chronic patients. Who handles what? When do you escalate?
  • Transitions in care: Protocols ensuring seamless movement between providers or care settings, such as hospital discharge, SNF transitions, and referral handoffs.
  • Closed-loop communication: Clear referral pathways and processes for managing referrals between parties. The referring PCP knows what happened; you know why the patient was sent.
  • Care coordination integration: Embedded care-coordination processes within your workflow; not a parallel system, but part of how you practice.

This is new. Such collaborative care arrangements are not required today under MSSP ACOs. Many specialists have informal relationships with referring primary care physicians but no formal arrangement that includes defined data sharing protocols or co-management agreements.

To avoid the full −20 point penalty, you need to demonstrate:

  1. Patient-level activities (PCP connection, communication, HRSN verification) across your ASM population
  2. At least one practice-level collaborative care arrangement meeting 3 of 5 elements

Performance is assessed based on at least 180 days of data from the performance year (the calendar year two years before the payment year). If your 2027 performance determines your 2029 payment adjustment, those workflows need to be running for most of 2027, which means the infrastructure needs to exist going into the performance year.

The implication: If you've operated as a referral-based practice without structured primary care relationships, you have 12 months to build them. This isn't something you implement in Q4 2026.

Promoting Interoperability: The −10 Point Penalty

CMS applies a penalty up to −10 points for failing to meet health IT requirements. The measures mirror MIPS PI requirements, and reporting can occur at the TIN level.

Required:

  • Certified EHR Technology (CEHRT) use
  • Security Risk Analysis each year
  • Data exchange via one of three options (electronic referral loops, HIE bi-directional exchange, or TEFCA-enabled exchange)
  • Patient electronic access to health information

Key measures and point allocations:

  • e-Prescribing: e-Prescribing plus PDMP query (10–20 points)
  • Health Information Exchange: send/receive referral loops, HIE bi-directional exchange, or TEFCA (15–30 points)
  • Provider to Patient Exchange: patient electronic access (1–25 points)
  • Public Health: immunization registry plus electronic case reporting (25 points)

Most practices already meeting MIPS PI requirements should meet ASM requirements without significant additional work. The burden here is lighter than the Improvement Activities requirements.


Payment Adjustments

The Risk Schedule

CMS adopted an escalating risk schedule:

  • Performance year 2027 → payment year 2029: up to ±9%
  • Performance year 2028 → payment year 2030: up to ±9%
  • Performance year 2029 → payment year 2031: up to ±10%
  • Performance year 2030 → payment year 2032: up to ±11%
  • Performance year 2031 → payment year 2033: up to ±12%

Adjustments apply to all Medicare Part B covered professional services delivered by the participant in the payment year.

Note that ±9% is where MIPS maxes out. ASM starts there and escalates to ±12%—a meaningful increase in financial exposure.

The Redistribution Percentage

Unlike MIPS (which is budget-neutral), ASM includes a 15% redistribution percentage. CMS withholds 15% of the total at-risk pool and returns it to the Medicare Trust Fund before distributing gains and losses.

This means ASM is designed to save Medicare money regardless of aggregate participant performance. CMS estimates the model could generate 3–4% net savings if specialists achieve modest improvements in preventable hospitalizations and imaging utilization.

The implication: The redistribution percentage effectively means that more than 1% of participants' Part B spending flows back to Medicare by design. The model isn't just testing performance improvement—it's structured to generate savings regardless of how well participants do collectively.
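A back-of-the-envelope version of that arithmetic, assuming the 15% withhold applies against the first-year maximum at-risk share of Part B revenue:

```python
max_at_risk_share = 0.09   # first performance year: up to 9% of Part B revenue at risk
redistribution = 0.15      # share of the at-risk pool returned to the Trust Fund

# Share of participants' Part B spending flowing back to Medicare by design.
print(f"{max_at_risk_share * redistribution:.2%}")  # 1.35%
```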

From Score to Payment

Final composite scores (0–100) are transformed into payment adjustment factors using a logistic exchange function—an S-curve that:

  • Compresses extreme scores to prevent outsized gains/losses
  • Centers neutral adjustment around a score of approximately 50
  • Creates proportionate adjustments for moderate performers while limiting volatility at the extremes

This differs from MIPS, which uses a linear function. The logistic curve means that scores close to the median produce larger payment differentials than scores at the extremes. The practical effect: moderate performance differences yield meaningful payment differences.
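To illustrate the shape, the sketch below uses a logistic curve centered at a score of 50 and capped at the first-year ±9% bound; the steepness parameter is an arbitrary illustrative choice, not a CMS value.

```python
import math

def payment_adjustment(score: float, max_adj: float = 0.09, k: float = 0.1) -> float:
    """Map a 0-100 composite score to a payment adjustment via a logistic
    (S-shaped) curve: neutral near 50, capped at +/- max_adj at the extremes.
    The steepness k is an illustrative assumption, not a CMS parameter."""
    return max_adj * (2 / (1 + math.exp(-k * (score - 50))) - 1)

for s in (20, 40, 50, 60, 80):
    print(s, f"{payment_adjustment(s):+.2%}")   # -8.15%, -4.16%, +0.00%, +4.16%, +8.15%
```

In this sketch, a ten-point move near the median (50 to 60) shifts payment by roughly four percentage points, while the same ten points near the top of the scale moves it far less; that compression is the behavior the bullets above describe.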

Because ASM has no fixed performance threshold, your payment outcome depends entirely on how you perform relative to peers that year. If everyone improves, you still need to outperform to receive positive adjustments. This is a tournament, not a test with a passing grade.

The Two-Year Lag

There's a two-year delay between performance and payment:

  • Performance year 2027: data submission by March 31, 2028; payment year 2029
  • Performance year 2028: data submission by March 31, 2029; payment year 2030
  • Performance year 2029: data submission by March 31, 2030; payment year 2031
  • Performance year 2030: data submission by March 31, 2031; payment year 2032
  • Performance year 2031: data submission by March 31, 2032; payment year 2033

This lag accommodates data validation and benchmark calculation but creates planning challenges. By the time you see your first payment adjustment, you're already two years into the model.


Adjustments and Safeguards

Complexity and Practice Size Adjustments

CMS recognizes that some practices treat more medically or socially complex populations. After all category scores are calculated, the following adjustments can be applied:

  • Complex-patient adjustment: HCC risk or dual-eligible share at or above the median; boost up to +10%
  • Small-practice adjustment: 15 or fewer clinicians in the practice; boost up to +10%
  • Solo practitioner adjustment: single-clinician practice; boost up to +15%

These adjustments help offset downside risk for practices that might otherwise be penalized for patient mix or structural constraints.

Note: CMS declined to establish a separate rural adjustment, stating it would be duplicative. Many rural practices will already qualify for complex-patient or small-practice adjustments.
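The rule text summarized here doesn't spell out exactly how the boost is applied, so the sketch below makes an explicit assumption: the largest applicable boost scales the final composite score, capped at 100. Treat it as one plausible reading rather than the model's arithmetic.

```python
def adjusted_composite(score: float, boosts: list[float]) -> float:
    """Apply the largest applicable boost (e.g., 0.10 for the complex-patient or
    small-practice adjustments, 0.15 for solo practitioners) to the composite
    score, capped at 100. Whether boosts stack, and what they multiply, is an
    assumption in this sketch."""
    return min(100.0, score * (1 + max(boosts, default=0.0)))

# Under these assumptions, a solo practitioner scoring 60 is treated as scoring 69.
print(adjusted_composite(60, [0.15]))  # 69.0
```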

Small Practice Exception for Quality Reporting

Practices with 15 or fewer clinicians may report quality at the TIN level rather than the individual level. This reduces data aggregation burden. TIN-level reporting is also allowed for Improvement Activities and Promoting Interoperability.

Safeguards and Relief Mechanisms

Extreme and uncontrollable circumstances: CMS established policies for neutral adjustments during declared disasters or major system outages. If circumstances beyond your control prevent accurate data submission or affect care delivery, you can apply for relief.

Appeals and error correction: A structured review period follows preliminary score publication. Clinicians have 30 days from the date their performance report is issued to file an appeal. This window covers data errors, attribution disputes, or measure calculation concerns.

Information blocking: Information blocking provisions are not included in ASM's definition of meaningful use, reducing one compliance complexity.


CMS Support and Resources

To promote successful adoption, CMS committed to several support mechanisms:

Interim feedback reports: Periodic updates showing preliminary quality and cost performance before final scoring. These allow you to course-correct during the performance year rather than waiting for retrospective results.

Technical assistance: Webinars, office hours, and help-desk support for reporting requirements, measure specifications, and submission procedures.

Data transparency: De-identified aggregate performance data by specialty and region. You'll be able to see how your cohort performs overall, even if individual participant data remains protected.

Waivers and safe harbors: To facilitate collaborative-care arrangements and shared savings within Stark and anti-kickback parameters. This is important for building the primary care relationships ASM requires without running afoul of fraud and abuse rules.

Learning network: A structured forum for participants to share best practices and implementation lessons. These networks have proven valuable in other CMMI models for spreading operational knowledge.

What CMS is not providing (yet): Retrospective cost data and real-time improvement activity data that practices may be accustomed to receiving. If you've been relying on CMS feedback reports to understand your cost position, you may need alternative infrastructure to get that visibility during the performance year.


Strategic Implications

For Cardiology Practices

Heart failure management is core to cardiology—ASM aligns financial incentives with clinical mission. The measures are familiar (beta-blockers, ACE/ARB therapy) but the scoring context is new.

Key considerations:

  • Episode cost visibility becomes essential, not optional
  • Hospitalization rates directly affect both quality (Q492) and cost scores
  • Patient-reported functional status collection needs operational integration
  • The collaborative care requirement may be new territory for practices without formal PCP relationships
  • Echocardiogram utilization isn't directly measured yet, but cost sensitivity to imaging is implicit

For Pain Management and Spine Specialists

The low back pain cohort presents different challenges. Multiple specialties compete within the same benchmarking pool, and the measures focus on screening behaviors that may feel tangential to procedural practice.

Key considerations:

  • Depression and BMI screening may require workflow changes
  • The functional status outcome measure (Q220) requires baseline and follow-up assessments
  • CMS is clearly interested in imaging utilization—expect future measures targeting this
  • Anesthesiologists in pain management may find some measures poorly fitted to their patient encounters
  • Being scored alongside orthopedic surgeons and neurosurgeons means understanding how your episode costs compare to theirs

For Health Systems and ACOs

ASM creates both alignment and complexity for integrated organizations.

Opportunities:

  • Specialist incentives now align with ACO cost targets
  • Shared infrastructure can support both programs
  • Care coordination investments serve multiple purposes

Challenges:

  • ASM and MSSP use different attribution methodologies
  • Clinicians may bounce between ASM and MIPS based on annual volume
  • The collaborative care requirements assume a PCP relationship exists—ACOs need to facilitate this

CMS explicitly encourages organizations to "layer" ASM incentives with ACO arrangements. Under LEAD (the ACO REACH replacement launching in 2027), ACOs will have formal mechanisms to engage specialists in episode-based arrangements. ASM participants will be natural partners.

The implication: Integrating care across specialties and aligning with ACO partners can prevent duplication and optimize cost scores. The organizations that treat ASM as part of their value-based care strategy—not a separate compliance exercise—will have advantages.

The Preparation Window

CMS believes a 2027 launch provides sufficient preparation time because ASM closely mirrors existing MIPS infrastructure. This is true for reporting mechanics but misses the operational changes required:

  • Primary care coordination isn't something you implement in Q4 2026
  • Episode cost visibility requires data infrastructure and clinical education
  • Functional status collection needs workflow integration and patient engagement
  • Collaborative care arrangements take time to negotiate and operationalize

Organizations that treat 2026 as a preparation year—not a waiting year—will have structural advantages when performance measurement begins.


What We're Watching

Episode Cost Data Availability

In the 2025 Physician Fee Schedule, CMS committed to making episode-level cost data available via API—not just as retrospective batch reports. If this materializes, practices could query patient-level episode costs in near-real-time.

This changes the game from "find out how you did after the fact" to "manage episode costs as they happen." We're building infrastructure to integrate with these feeds as they become available.

Measure Evolution

The low back pain measure set is incomplete (one TBD slot). CMS clearly wants an imaging utilization measure but faced development concerns. Expect continued evolution in both cohorts as CMS learns what's measurable and meaningful.

CMS also noted that while cardiology is the primary specialty for heart failure, other specialties may play significant roles. If data show additional subspecialties can be fairly compared, CMS may expand eligible specialty types.

Expansion Potential

CMS frames ASM as a test. If it achieves savings targets while maintaining quality, expect expansion to additional conditions. Diabetes, COPD, chronic kidney disease, and oncology are logical candidates—all high-spend conditions with specialist involvement and measurable episodes.

The infrastructure you build for ASM may serve you across future models.


Closing

The Ambulatory Specialty Model isn't a minor policy update. It's a structural test of whether financial accountability can change specialist practice patterns at scale.

For practices in mandatory areas, participation isn't optional—but preparation is a choice. The episode data that will determine your 2027 performance is structurally the same as data available today through MIPS cost measures. The question is whether you're using the preparation window to understand where you stand.

The organizations treating this window as a learning opportunity—not just a compliance exercise—will have a head start when "voluntary" becomes "mandatory."


