FDA in 2025: AI-SaMD Lifecycle and New Cybersecurity Expectations — What to Change in the Roadmap

AI-SaMD in 2025: Why the FDA Shifted from Approval to Lifecycle Control

In 2025, the regulatory conversation around AI-enabled medical software changed in a fundamental way. The U.S. Food and Drug Administration no longer treats artificial intelligence primarily as a novel technology requiring episodic review. Instead, AI-enabled software as a medical device (AI-SaMD) is now regulated as a continuously evolving system whose behavior over time is as important as its performance at the moment of submission. This shift reflects a practical reality: unlike traditional medical devices or static software, AI systems are designed to learn, update, and adapt. Earlier regulatory models built around fixed specifications and infrequent changes proved insufficient for tools that rely on data pipelines, retraining cycles, and ongoing optimization. As a result, the FDA’s focus moved decisively toward total product lifecycle (TPLC) oversight.

What changed in 2025 is not simply the volume of guidance, but its emphasis. FDA communications increasingly address how developers plan for change: how models will be updated, how performance drift will be detected, and how risks will be controlled after deployment. This marks a departure from approval-centric thinking. Initial clearance is no longer the endpoint; it is the starting point of regulatory accountability.

Another driver of this shift is convergence. AI governance and cybersecurity oversight are now tightly linked. Connected devices, cloud-based models, and remote update mechanisms mean that safety, performance, and security cannot be separated into different silos. From the FDA’s perspective, an AI system that cannot be governed over time represents a systemic risk, regardless of its baseline accuracy.

For additional insights, refer to “Cybersecurity Incidents in Connected Health: Lessons from 2025 for Patient Monitor Vendors and Ecosystems.”

For planning purposes, the timing matters. Draft and final guidances released toward the end of 2025 are effectively signals to product and regulatory teams preparing 2026 roadmaps. They indicate that lifecycle planning, documentation, and post-market controls must be treated as core design elements, and not as compliance work deferred until after launch.

The strategic implication is clear: in 2026, organizations that frame AI-SaMD development around static submissions will struggle to scale. Those that embed lifecycle governance into their architecture will align more naturally with the FDA’s evolving regulatory model.

FDA Draft Guidance on AI-Enabled Device Software Functions: What Product Teams Must Redesign

The FDA’s 2025 draft guidance on AI-enabled device software functions makes one point unmistakably clear: AI-SaMD is no longer evaluated as a static product, but as a managed process. For product, data science, and regulatory teams, this reframes how roadmaps, architectures, and submissions must be designed.

At the center of the guidance is the Total Product Lifecycle (TPLC) concept. Rather than focusing narrowly on premarket performance, the FDA expects developers to demonstrate how an AI system will be controlled, monitored, and updated over time. This includes not only what the model does today, but how it may change tomorrow and how those changes remain safe and effective.

A key mechanism in this framework is the Predetermined Change Control Plan (PCCP). The FDA signals that certain types of post-market modifications, such as retraining with new data or adjusting model parameters, may be permissible without a new submission if they are anticipated, documented, and governed up front. This shifts substantial responsibility to development teams to define boundaries in advance: what kinds of changes are expected, how they will be validated, and when regulatory re-engagement is required.
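A PCCP itself is a regulatory document, not code, but its core logic can be made concrete. The minimal Python sketch below shows how a team might encode the boundaries such a plan defines; the change categories, protocol IDs, and metric thresholds are illustrative assumptions, not FDA-prescribed fields.

```python
from dataclasses import dataclass, field
from enum import Enum


class ChangeType(Enum):
    """Categories of post-market modification a PCCP might anticipate."""
    RETRAIN_NEW_DATA = "retrain_on_new_data"
    THRESHOLD_TUNING = "threshold_tuning"
    ARCHITECTURE_CHANGE = "architecture_change"  # typically outside a PCCP


@dataclass
class AllowedChange:
    """One pre-authorized modification, with its validation protocol."""
    change_type: ChangeType
    validation_protocol: str               # e.g., an internal test-plan ID
    acceptance_criteria: dict[str, float]  # metric name -> minimum value


@dataclass
class ChangeControlPlan:
    """In-code mirror of a Predetermined Change Control Plan."""
    allowed_changes: list[AllowedChange] = field(default_factory=list)

    def is_preauthorized(self, change_type: ChangeType) -> bool:
        """True if the change can proceed under the existing clearance."""
        return any(c.change_type == change_type for c in self.allowed_changes)


# Usage: an architecture change is not in the plan, so it would trigger
# regulatory re-engagement rather than an in-house update.
pccp = ChangeControlPlan(allowed_changes=[
    AllowedChange(ChangeType.RETRAIN_NEW_DATA, "VAL-PROTO-001",
                  {"sensitivity": 0.92, "specificity": 0.90}),
])
assert pccp.is_preauthorized(ChangeType.RETRAIN_NEW_DATA)
assert not pccp.is_preauthorized(ChangeType.ARCHITECTURE_CHANGE)
```

Maintaining a machine-readable mirror of the plan is optional, but it keeps the pre-authorization boundary testable in CI rather than buried in a document.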

For product teams, this has architectural consequences. Model update strategies, data pipelines, and validation workflows become regulatory artifacts, not just internal engineering choices. Undocumented retraining or ad hoc updates introduce compliance risk, even if clinical performance improves. Conversely, conservative but transparent governance can enable faster iteration within approved limits.

The guidance also sharpens distinctions between different AI implementations. Locked models, i.e., those that do not change post-deployment, remain simpler to regulate but offer limited adaptability. Adaptive models, while more powerful, face higher expectations around monitoring, performance drift detection, and rollback procedures (a minimal sketch of such a drift check follows below). Similarly, AI embedded within broader systems inherits the regulatory risk profile of the entire device, not just the algorithmic component.

Another notable signal is the FDA’s emphasis on clarity over complexity. Developers are not rewarded for opaque sophistication. Instead, the agency prioritizes traceability, documentation, and explainable governance structures. Product teams must be able to explain not only how an AI model works, but how decisions about updates, data inclusion, and performance thresholds are made and enforced.
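To make the monitoring expectation for adaptive models concrete, here is a minimal sketch of a drift check that compares live performance against the cleared baseline and flags metrics that fall outside a pre-declared tolerance. The metric names, baseline values, and tolerance are illustrative assumptions.

```python
def check_performance_drift(
    baseline: dict[str, float],
    observed: dict[str, float],
    tolerance: float = 0.03,
) -> list[str]:
    """Return the metrics whose observed value has dropped more than
    `tolerance` below the cleared baseline. An empty list means no
    actionable drift; a non-empty list would trigger the rollback
    procedure defined in the change management plan."""
    return [
        metric
        for metric, expected in baseline.items()
        if expected - observed.get(metric, 0.0) > tolerance
    ]


# Illustrative numbers only: sensitivity has drifted four points below
# baseline, beyond the three-point tolerance, so it is flagged.
drifted = check_performance_drift(
    baseline={"sensitivity": 0.94, "specificity": 0.91},
    observed={"sensitivity": 0.90, "specificity": 0.91},
)
assert drifted == ["sensitivity"]
```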

Taken together, the draft guidance implies a redesign of internal workflows. AI roadmaps must now align with regulatory logic; model management plans must exist alongside feature backlogs. For 2026, organizations that treat lifecycle planning as a first-class product requirement will be better positioned to navigate both FDA review and downstream partnerships.

FDA Final Guidance on Cybersecurity: “Cyber Devices” and Quality System Integration

While AI lifecycle governance attracted much of the attention in 2025, the FDA’s final cybersecurity guidance arguably has the most immediate operational impact. The key change is conceptual: cybersecurity is no longer treated as a discrete technical feature, but as an intrinsic property of medical device quality.

The guidance formalizes the FDA’s focus on so-called “cyber devices”: medical devices that rely on software, connectivity, remote updates, or data exchange. By definition, most AI-enabled and connected products fall into this category. As a result, cybersecurity expectations now extend across the entire device lifecycle, from design and development through post-market maintenance.

Crucially, cybersecurity requirements are now embedded into quality system processes and premarket submission content. This means developers must demonstrate not only that risks have been identified, but that there are structured processes to manage vulnerabilities over time. One-time penetration tests or generic security statements are no longer sufficient. The FDA expects evidence of secure-by-design principles, documented threat models, and defined procedures for vulnerability intake, assessment, and remediation.
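The guidance describes process expectations rather than implementations, but the triage step of a vulnerability-intake procedure can be sketched. In the hypothetical example below, a severity score is mapped to a remediation deadline; the CVSS bands, day counts, and CVE identifier are illustrative policy choices and placeholders, not FDA-mandated values.

```python
from dataclasses import dataclass
from datetime import date, timedelta


def remediation_deadline(cvss_score: float, reported: date) -> date:
    """Map a CVSS base score to a remediation due date.
    The bands and day counts are illustrative policy choices."""
    if cvss_score >= 9.0:
        return reported + timedelta(days=7)    # critical
    if cvss_score >= 7.0:
        return reported + timedelta(days=30)   # high
    return reported + timedelta(days=90)       # medium/low


@dataclass
class VulnerabilityRecord:
    """One entry in a vulnerability intake log."""
    cve_id: str              # placeholder ID below, for illustration
    affected_component: str
    cvss_score: float
    reported: date

    @property
    def due(self) -> date:
        return remediation_deadline(self.cvss_score, self.reported)


record = VulnerabilityRecord("CVE-2025-0000", "openssl", 9.1,
                             date(2025, 11, 3))
assert record.due == date(2025, 11, 10)
```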

AI-SaMD products face heightened scrutiny because of their architectural characteristics. Continuous connectivity, cloud-based components, third-party dependencies, and update mechanisms all expand the attack surface. From a regulatory perspective, an unsecured update pathway or poorly governed dependency can compromise safety just as directly as a clinical performance failure.

Another significant signal is the FDA’s emphasis on post-market responsibility. Cybersecurity does not end at clearance. Manufacturers are expected to monitor emerging threats, respond to newly discovered vulnerabilities, and communicate appropriately with users and regulators. This reinforces a broader regulatory trend: safety and security are ongoing obligations, not point-in-time achievements.

For 2026 planning, the implication is clear. Cybersecurity work can no longer be siloed within IT or engineering teams. It must be integrated into quality management, regulatory strategy, and product governance, or risk becoming a bottleneck to approval, deployment, and scale.

Practice Focus: What Artifacts to Prepare and What to Change in the 2026 Roadmap

The FDA’s 2025 guidances translate into a practical reality for teams: regulatory expectations are now assessed through documentation, processes, and ownership, not intentions. For 2026, the most effective response is to align internal artifacts with how the FDA evaluates AI-SaMD and cyber devices across their lifecycles.

One foundational deliverable is threat modeling documentation. This artifact should define the system’s scope, identify attack surfaces, outline plausible threat scenarios, and document mitigation strategies. Importantly, threat models are expected to evolve as the product changes; static diagrams quickly lose regulatory value once new features, data sources, or update mechanisms are introduced.

Equally central is the Software Bill of Materials (SBOM). Beyond listing components, the FDA expects manufacturers to understand their dependencies, track version changes, and define update strategies. SBOMs are not simply inventories; they are tools for managing supply-chain risk and responding to newly disclosed vulnerabilities in third-party software.
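To illustrate how an SBOM becomes an operational tool rather than an inventory, the sketch below walks a CycloneDX-style component list and flags entries that appear in an advisory map. The JSON shape follows CycloneDX’s top-level components array; the package names, versions, and advisory data are hypothetical stand-ins for a real vulnerability feed.

```python
import json


def flag_vulnerable_components(sbom_json: str,
                               advisories: dict[str, set[str]]) -> list[str]:
    """Cross-reference a CycloneDX-style SBOM against an advisory map
    of component name -> set of known-vulnerable versions."""
    sbom = json.loads(sbom_json)
    flagged = []
    for component in sbom.get("components", []):
        name, version = component.get("name"), component.get("version")
        if version in advisories.get(name, set()):
            flagged.append(f"{name}=={version}")
    return flagged


# Hypothetical SBOM fragment and advisory feed, for illustration only.
sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "numpy", "version": "1.26.4"},
        {"name": "pillow", "version": "9.0.0"},
    ],
})
advisories = {"pillow": {"9.0.0"}}
assert flag_vulnerable_components(sbom, advisories) == ["pillow==9.0.0"]
```

In practice, a maintained feed such as NVD or OSV would replace the hand-written advisory map, but the cross-referencing step is the same.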

For AI-enabled devices, a change management plan has become critical. This includes clearly defined retraining triggers, validation steps following updates, performance acceptance criteria, and rollback procedures if issues arise. When aligned with a Predetermined Change Control Plan, this artifact helps demonstrate that adaptation is controlled rather than ad hoc (a minimal sketch of this gating step appears below).

Post-market monitoring frameworks are another area of heightened scrutiny. Teams should be prepared to show how they detect performance drift, cybersecurity incidents, and emerging risks in real-world use. This includes intake processes, escalation paths, and decision-making authority: who acts, how quickly, and under what conditions.
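Returning to the change management plan: the sketch below shows the gating step, in which a retrained candidate is promoted only if it meets every acceptance criterion and is otherwise rejected in favor of the deployed model (the rollback path). The metrics and thresholds are illustrative assumptions.

```python
def gate_model_update(candidate_metrics: dict[str, float],
                      acceptance_criteria: dict[str, float]) -> bool:
    """Promote a retrained model only if every acceptance criterion
    from the change management plan is met; otherwise keep (or roll
    back to) the currently deployed model."""
    return all(
        candidate_metrics.get(metric, 0.0) >= minimum
        for metric, minimum in acceptance_criteria.items()
    )


# Illustrative: the candidate misses the specificity floor, so the
# update is rejected and the deployed model remains in service.
criteria = {"sensitivity": 0.92, "specificity": 0.90}
candidate = {"sensitivity": 0.95, "specificity": 0.88}
assert not gate_model_update(candidate, criteria)
```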

Finally, the FDA increasingly looks for clarity around cross-functional ownership. Effective governance requires coordination between product, engineering, quality, security, and regulatory teams. Ambiguous ownership often surfaces as gaps during review.

From a roadmap perspective, the message is straightforward: these activities must be scheduled like core features. Retrofitting governance after development slows timelines and increases risk. In 2026, regulatory readiness will increasingly shape not just approval outcomes, but partnerships, procurement decisions, and long-term scalability.

References

  1. U.S. Food and Drug Administration. (2024). Artificial intelligence software as a medical device (AI-SaMD).
    https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device
  2. U.S. Food and Drug Administration. (2024). Cybersecurity in medical devices.
    https://www.fda.gov/medical-devices/digital-health-center-excellence/cybersecurity
  3. U.S. Food and Drug Administration. (2023). Cybersecurity in medical devices: Quality system considerations and content of premarket submissions.
    https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions
