FDA CDS Guidance: What’s Changed and Who Cares

What the 2026 FDA CDS guidance covers (and what it doesn’t)

The FDA’s January 2026 update to its Clinical Decision Support (CDS) guidance is not a wholesale rewrite, but it materially tightens how the agency explains scope, risk, and expectations, particularly for software that influences clinical judgment. The document sits at the intersection of software policy, software as a medical device (SaMD), and AI governance, and is intended to help developers determine whether their product remains non-device CDS or crosses into regulated medical device software.

At a high level, the guidance applies to software functions that analyze, process, or present medical information to support decisions by healthcare professionals. It does not regulate clinical practice, nor does it impose requirements on clinicians’ use of tools. Instead, it focuses on developer responsibility: how software is designed, labeled, validated, and monitored.

Equally important is what the guidance does not cover. It does not introduce a new approval pathway, mandate premarket submissions for all CDS, or automatically regulate AI-based analytics. Nor does it apply to administrative, billing, scheduling, or general wellness software that lacks a clinical decision component. The update is primarily about classification clarity and documentation discipline, not enforcement expansion.

Intended use and “clinical decision support” scope

The FDA continues to anchor CDS classification in intended use, not in underlying technology. A product is considered CDS when it provides information intended to support clinical decision-making about the diagnosis, treatment, prevention, or management of disease. This includes risk scores, care suggestions, prioritization cues, and patient-specific insights, provided they are meant to influence a clinician’s judgment.

Crucially, the 2026 guidance reiterates that how the product is described (in labeling, instructions for use, marketing materials, and even sales demos) matters as much as what it technically does. Software that claims to “recommend,” “determine,” or “optimize” care pathways is more likely to fall within CDS scope than tools that simply display or organize information. The scope is clinician-facing by default, but patient-facing tools can also qualify as CDS if they meaningfully shape clinical decisions indirectly, such as by generating outputs intended for clinician review.

CDS vs AI/ML-enabled analytics

A central clarification in 2026 is that AI or ML does not, by itself, trigger regulation. The FDA explicitly separates how an output is generated from what role that output plays in decision-making. Statistical models, rules engines, and generative AI systems are all evaluated through the same lens: does the software provide actionable clinical insight, and can a clinician independently assess its basis?

The guidance avoids model-specific language and instead emphasizes explainability and user understanding. AI-enabled analytics that remain transparent and reviewable may stay outside device regulation, while non-AI tools that obscure rationale may fall inside it.

What changed in 2026

Updated definitions and examples

The 2026 FDA update sharpens CDS interpretation by replacing ambiguous edge cases with more explicit definitions and examples. The practical effect is that more products can self-classify with fewer “it depends” moments, especially for tools that generate risk scores, triage lists, or patient-specific suggestions.

A key clarification is that CDS classification hinges on intended influence, not on tone. Software does not avoid CDS status simply because it phrases outputs as “insights” or “information.” If the function is intended to shape diagnosis, treatment, prevention, or management decisions, it sits within CDS scope. The updated examples make this easier to apply to modern workflows, including dashboards that rank patients by deterioration risk, systems that propose next-step testing, and tools that summarize clinical records into decision-ready statements.

The guidance also draws a cleaner line around “mere presentation.” Tools that organize, filter, or display medical information without interpretation remain more likely to be outside device scope, provided the labeling does not imply that the tool is making clinical judgments. In other words, the examples now explicitly link classification to how outputs are framed in product materials, not only to internal logic.

Risk framing and “independent review” expectations

The most operational change is how the FDA describes independent review. Prior versions often treated independent review as a conceptual test; the 2026 language frames it as an expectation that must be met in real clinical use. The underlying question is: can a clinician reasonably assess the basis for the output and decide whether to act on it? Independent review does not require full algorithmic transparency. It also does not require access to model weights, source code, or training datasets. Instead, it requires that the product provide enough understandable context (inputs used, clinical rationale, evidence links, or a transparent logic pathway) so that a clinician can judge appropriateness using medical knowledge. Outputs that appear as opaque “answers” without an interpretable basis are more likely to be treated as higher risk.
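As an illustration of that context requirement, here is a minimal sketch of an output payload that discloses its basis, plus a crude proxy check for reviewability. All field names and the example clinical rationale are hypothetical; this is not an FDA-defined schema.

```python
from dataclasses import dataclass, field

@dataclass
class CdsOutput:
    """Hypothetical shape of a CDS output that exposes its basis for review."""
    conclusion: str                                          # the insight shown to the clinician
    inputs_used: list[str] = field(default_factory=list)     # data elements the logic relied on
    rationale: str = ""                                      # plain-language logic pathway
    evidence_links: list[str] = field(default_factory=list)  # guidelines / validation support

def supports_independent_review(out: CdsOutput) -> bool:
    """Crude proxy: the output must disclose its inputs and give either a
    rationale or supporting evidence the clinician can assess."""
    return bool(out.inputs_used) and bool(out.rationale or out.evidence_links)

opaque = CdsOutput(conclusion="High sepsis risk")
transparent = CdsOutput(
    conclusion="High sepsis risk",
    inputs_used=["lactate", "heart rate", "temperature"],
    rationale="Meets 2 of 3 qSOFA-style criteria over the last 6 hours",  # illustrative only
    evidence_links=["internal validation summary v1.2"],
)
print(supports_independent_review(opaque))       # False
print(supports_independent_review(transparent))  # True
```

A real product would carry far richer context, but the design point stands: the reviewable basis travels with the output rather than living only in documentation.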

The guidance also shifts risk framing toward over-reliance and workflow effects, not merely correctness. A tool can be risky if it predictably induces clinicians to defer judgment, even if its average accuracy is acceptable.

CDS vs medical device software: decision tree

When your product becomes regulated device software

Under the 2026 guidance, the key question is no longer whether software touches clinical data, but whether it crosses the line from support to decision-making in a way that a clinician cannot reasonably override or independently assess. The FDA frames this as a functional decision tree rather than a technical test.

A CDS function is more likely to be considered regulated medical device software when it does one or more of the following:

  • Generates specific treatment, diagnostic, or triage recommendations rather than general information
  • Prioritizes patients or actions in a way that materially shapes clinical workflow
  • Automates decisions or default actions without meaningful clinician intervention
  • Produces outputs whose clinical rationale cannot be readily understood or reviewed

By contrast, software is more likely to remain non-device CDS when it clearly positions itself as informational support, allows clinicians to review the underlying inputs and logic, and does not present outputs as authoritative or prescriptive. Enforcement discretion continues to apply, but only when these conditions are credibly met and documented.

Importantly, the guidance emphasizes that reliance risk matters. If real-world use predictably leads clinicians to defer judgment, even unintentionally, the FDA may view the function as device-like regardless of disclaimers.
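The functional questions above can be sketched as a short classification walk-through. This is an illustrative internal-triage aid under our own reading of the guidance, not the FDA’s actual test, and the parameter names are invented for clarity.

```python
def classify_cds(
    gives_specific_recommendation: bool,  # specific treatment/diagnostic/triage advice
    shapes_workflow: bool,                # prioritization that materially steers care
    automates_action: bool,               # acts without meaningful clinician intervention
    rationale_reviewable: bool,           # clinician can assess the basis for the output
) -> str:
    """Illustrative walk through the decision-tree questions; not a legal determination."""
    if automates_action or not rationale_reviewable:
        return "likely device software"
    if gives_specific_recommendation or shapes_workflow:
        return "device risk elevated - review labeling and claims"
    return "more likely non-device CDS"

print(classify_cds(
    gives_specific_recommendation=False,
    shapes_workflow=True,
    automates_action=False,
    rationale_reviewable=True,
))  # device risk elevated - review labeling and claims
```

Encoding the questions this way is mainly useful for design reviews: it forces a team to answer each question explicitly instead of treating classification as a gestalt judgment.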

Typical “gotchas” in labeling and claims

One of the most common reasons products cross into regulated territory is not functionality, but language. The 2026 guidance makes clear that FDA will consider labeling, websites, pitch decks, screenshots, and training materials when assessing intended use.

Terms such as “recommend,” “determine,” “optimize,” “identify the best treatment,” or “ensure appropriate care” frequently trigger device classification, even when developers believe the tool is advisory. Similarly, claims about reducing errors, improving outcomes, or standardizing decisions can imply clinical authority if not carefully framed.

Another frequent pitfall is inconsistency: internal documentation may describe a tool as supportive, while marketing materials portray it as decisive or autonomous. In FDA’s view, these inconsistencies undermine non-device CDS positioning. The guidance implicitly encourages teams to align product behavior, labeling, and sales messaging under a single, defensible interpretation.
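Because the risky terms are known in advance, teams can screen marketing and labeling copy mechanically before review. The sketch below flags the prescriptive terms quoted above; the term list is illustrative and non-exhaustive, and a hit means “escalate for human review,” not “non-compliant.”

```python
import re

# Terms the guidance flags as implying clinical authority (illustrative, non-exhaustive)
PRESCRIPTIVE_TERMS = ["recommend", "determine", "optimize", "ensure", "identify the best"]

def flag_prescriptive_claims(copy: str) -> list[str]:
    """Return the flagged terms found in a piece of marketing or labeling copy."""
    lowered = copy.lower()
    return [t for t in PRESCRIPTIVE_TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", lowered)]

print(flag_prescriptive_claims("Our tool helps clinicians determine the optimal dose"))
# ['determine']
```

Running a screen like this over websites, pitch decks, and in-app strings is also a cheap way to catch the internal/external inconsistency problem described above.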

Documentation package you should maintain

Clinical rationale and evidence

The 2026 guidance makes clear that FDA expects developers of CDS software, regulated or not, to maintain a coherent clinical rationale explaining why the tool’s outputs are appropriate for their intended use. This does not mean conducting large clinical trials for low-risk CDS, but it does require traceability: a documented link between inputs, logic, outputs, and clinical context.

Evidence should be proportionate to risk. For lower-risk CDS, this may include literature references, clinical guidelines, expert input, and internal validation showing that outputs align with accepted practice. For higher-impact functions, FDA signals an expectation of stronger support, such as retrospective validation, performance benchmarking, or limited prospective evaluation. Importantly, the guidance stresses that evidence must support how the tool is actually used, not just how it performs in isolation.

Human factors and usability

Human factors documentation is elevated in the 2026 update from “nice to have” to core risk control. FDA explicitly links usability to safety, noting that confusing interfaces, poorly explained outputs, or alert-heavy designs can drive over-reliance or misuse. Developers are expected to document how intended users understand CDS outputs, including assumptions, uncertainty, and limitations. This may involve formative usability testing, simulated clinical scenarios, or structured feedback from representative users. The goal is not perfection, but demonstrable effort to ensure that clinicians can interpret, contextualize, and appropriately challenge the software’s output.

Post-market monitoring signals

The guidance also reinforces expectations around post-market monitoring, even for CDS under enforcement discretion. FDA encourages developers to define what signals would indicate misuse, drift, or emerging risk. These may include user complaints, unexpected patterns of reliance, performance degradation, or use outside intended scope.

Monitoring does not automatically imply active surveillance for all CDS. However, developers should document how they collect feedback, review incidents, and decide when changes or escalation are warranted. The emphasis is on preparedness: being able to show FDA that risks are monitored and managed over time, not discovered retroactively.
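One lightweight way to document that preparedness is to define each signal with an explicit review trigger. The signal names, metrics, and thresholds below are entirely hypothetical placeholders; the guidance asks that signals be defined, not that any particular metric be used.

```python
# Hypothetical post-market signal definitions: name -> description and review trigger
MONITORING_SIGNALS = {
    "override_rate_drop": {
        "description": "Clinicians stop overriding outputs (possible over-reliance)",
        "trigger": lambda m: m.get("override_rate", 1.0) < 0.05,
    },
    "complaint_spike": {
        "description": "User complaints exceed baseline",
        "trigger": lambda m: m.get("complaints_per_1k", 0) > 3,
    },
    "out_of_scope_use": {
        "description": "Use detected outside the intended clinical context",
        "trigger": lambda m: m.get("out_of_scope_fraction", 0.0) > 0.10,
    },
}

def signals_needing_review(metrics: dict) -> list[str]:
    """Return the signals whose triggers fire for this reporting period."""
    return [name for name, s in MONITORING_SIGNALS.items() if s["trigger"](metrics)]

print(signals_needing_review({"override_rate": 0.02, "complaints_per_1k": 1}))
# ['override_rate_drop']
```

The value is less in the automation than in the record: each firing becomes a documented review decision, which is exactly the evidence of ongoing risk management the guidance asks for.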

Compliance checklist

Below is a practical, reusable checklist aligned with the FDA’s 2026 CDS guidance. It is intentionally phrased as internal control questions that teams can apply during design reviews, audits, or due diligence.

Scope and classification

  • ☐ Have we clearly defined the intended use (clinical context, user, decision supported)?
  • ☐ Does the software provide patient-specific insights that could influence diagnosis, treatment, or management?
  • ☐ Can a clinician independently review and assess the basis for each output?
  • ☐ Is our CDS positioning consistent across product, labeling, website, demos, and sales materials?

Risk and functionality

  • ☐ Does the software recommend, prioritize, or automate actions rather than present information?
  • ☐ Could typical users reasonably over-rely on outputs in real-world workflow?
  • ☐ Have we documented why enforcement discretion applies (if claimed)?

Labeling and claims

  • ☐ Do we avoid prescriptive language (“recommend,” “determine,” “ensure”)?
  • ☐ Are limitations, uncertainty, and user responsibilities clearly stated?
  • ☐ Are screenshots and examples representative of real use?

Documentation

  • ☐ Clinical rationale and evidence proportional to risk
  • ☐ Usability / human factors documentation showing user comprehension
  • ☐ Change management and version control records

Post-market

  • ☐ Defined signals for misuse, drift, or emerging risk
  • ☐ Feedback and complaint handling process documented
  • ☐ Internal review cadence established

This checklist is not a substitute for legal review, but it reflects the minimum documentation discipline FDA now expects in practice.
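For teams that want to track the checklist over time, it can be mirrored as data so design reviews and audits report open items mechanically. The item keys below are our shorthand for the questions above, not FDA terminology.

```python
# Checklist mirrored as data; keys are shorthand for the questions above (assumed names).
CHECKLIST = {
    "scope_and_classification": [
        "intended_use_defined", "patient_specific_influence_assessed",
        "independent_review_possible", "positioning_consistent",
    ],
    "risk_and_functionality": [
        "recommendation_vs_information_assessed", "over_reliance_considered",
        "enforcement_discretion_documented",
    ],
    "labeling_and_claims": [
        "prescriptive_language_avoided", "limitations_stated",
        "screenshots_representative",
    ],
    "documentation": [
        "clinical_rationale_on_file", "usability_evidence_on_file",
        "change_management_records",
    ],
    "post_market": [
        "signals_defined", "feedback_process_documented", "review_cadence_set",
    ],
}

def open_items(answers: dict) -> list[str]:
    """List checklist items not yet marked complete."""
    return [item for items in CHECKLIST.values()
            for item in items if not answers.get(item, False)]
```

A review then amounts to maintaining the `answers` dict per release and requiring `open_items` to be empty (or explicitly waived) before ship.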

FAQs

Does my GenAI assistant count as CDS?

Generative AI does not receive special treatment under the 2026 guidance. The FDA is explicit that model type is irrelevant to classification. A GenAI assistant may be CDS or regulated medical device software, depending entirely on its intended use and functional role in clinical decision-making.

If a GenAI tool summarizes records, drafts notes, or retrieves guidelines without shaping clinical judgment, it is more likely to remain outside device scope. However, if it generates patient-specific assessments, risk interpretations, triage suggestions, or care options intended to influence clinician decisions, it likely falls within CDS. The same independent review standard applies: clinicians must be able to understand and assess the basis for the output. Prompt-driven flexibility does not exempt a product from regulation if its practical use steers decisions.

What about patient-facing tools?

Patient-facing tools receive heightened scrutiny because FDA assumes a higher risk of misunderstanding and over-reliance. Software that provides patients with generalized educational information is typically outside CDS scope. However, tools that generate patient-specific recommendations, risk estimates, or care guidance, especially if intended for clinician review, may still qualify as CDS or even regulated device software.

The guidance makes clear that disclaimers alone are insufficient. If outputs are likely to be interpreted as medical advice, or if clinicians are expected to act on them without independent assessment, regulatory expectations increase.

References

  1. U.S. Food and Drug Administration. (2022, updated 2024). Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software
  2. U.S. Food and Drug Administration. (2023). Digital Health Policy Navigator. https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-policy-navigator
  3. Medmarc Insurance Company. (2025, January 8). FDA Stands Its Ground on CDS Guidance. https://medmarc.com/life-sciences-news-and-resources/blog/fda-stands-its-ground-on-cds-guidance
