AI in Regulated Manufacturing — Practical Guidance for Annex 22 Compliance

ai · annex-22 · gxp · regulated-manufacturing

March 26, 2026

EU GMP Annex 22 introduces a structured compliance framework for AI and machine learning systems used in regulated manufacturing. Understanding which AI systems fall within its scope, and what compliance requires for those systems, is the practical challenge facing regulated manufacturers now — before compliance becomes mandatory.

This article focuses on the practical steps: scoping, validation, human oversight design, and monitoring. It assumes the reader is familiar with GxP principles and is responsible for determining how Annex 22 applies to their organisation's technology environment.

The threshold question: GxP-relevant AI decisions

Not all AI use in a regulated manufacturer is subject to Annex 22. The threshold is whether an AI system produces outputs that influence GxP records or GxP decisions. An AI system that recommends reorder quantities for packaging materials — where the final decision is made by a planner and no GxP record is created — is below the threshold. An AI system that classifies product images as pass or fail for a quality inspection — where the classification influences a GxP record — is above it.

The distinction between "AI-assisted" and "AI-influenced" decisions is where most organisations will draw the scope boundary. AI-assisted means the AI produces a recommendation that a human reviews and decides on. AI-influenced means the AI output directly creates or changes a GxP record, even if a human nominally approves it. Annex 22 applies to both categories where the AI output affects GxP outcomes, but the validation and oversight requirements scale with the risk level.

Conducting an Annex 22 scoping assessment

The scoping assessment answers three questions for each AI system in use or planned: Does this system's output influence GxP records or decisions? What is the risk level of those GxP decisions? What validation and oversight measures are currently in place?

For a Business Central environment, the relevant AI systems to assess include: Microsoft Copilot features enabled in the BC tenant; AI-connected integrations (predictive maintenance platforms, vision inspection systems, process analytical technology); and AI functionality in connected systems (QMS, LIMS) that feeds data into BC.

The scoping assessment should be documented and retained. It demonstrates to an inspector that the organisation has actively identified and assessed its AI obligations — not simply claimed unawareness.
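The three scoping questions lend themselves to a simple, repeatable record structure. The sketch below is one illustrative way to capture the assessment per system; the class names, fields, and example systems are assumptions for illustration, not terminology from Annex 22 itself.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    NONE = "out of scope"
    LOW = "low"
    HIGH = "high"

@dataclass
class AiSystemAssessment:
    """One record in the documented Annex 22 scoping assessment."""
    system_name: str
    influences_gxp: bool              # Q1: does the output influence GxP records/decisions?
    decision_risk: RiskLevel          # Q2: risk level of those GxP decisions
    controls_in_place: list = field(default_factory=list)  # Q3: current measures

    def in_scope(self) -> bool:
        # The threshold question: GxP influence decides scope; risk level
        # then scales the validation and oversight effort.
        return self.influences_gxp

# Hypothetical examples mirroring the threshold discussion above
assessments = [
    AiSystemAssessment("Copilot reorder-quantity suggestions", False, RiskLevel.NONE),
    AiSystemAssessment("Vision inspection pass/fail classifier", True, RiskLevel.HIGH,
                       ["human review step", "OQ test dataset"]),
]
in_scope = [a.system_name for a in assessments if a.in_scope()]
```

Keeping the assessment in a structured form makes it straightforward to export the retained, inspector-facing document from the same source of truth.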

Validation approach for AI models in regulated processes

AI model validation differs from traditional CSV in one key respect: the model's behaviour is not fully determinable from its specification alone. Statistical performance metrics must supplement the functional validation evidence.

The validation approach for an AI model used in a regulated process should include: definition of the intended use and performance criteria before validation begins; assessment of training data quality, representativeness, and documentation; IQ evidence for the model deployment environment; OQ evidence demonstrating that the model meets its performance criteria on a representative test dataset; PQ evidence from actual operating conditions, including edge cases and boundary conditions; and a documented drift monitoring procedure.
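The OQ step above — demonstrating that the model meets its predefined performance criteria on a representative test dataset — can be sketched as a simple check. The metric names, thresholds, and pass/fail labels below are illustrative assumptions; the actual criteria must be defined in the validation plan before testing begins.

```python
def oq_performance_check(predictions, labels, criteria):
    """Compare model performance on a representative OQ test set against
    acceptance criteria that were defined before validation began."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == "fail" and y == "fail")
    fn = sum(1 for p, y in zip(predictions, labels) if p == "pass" and y == "fail")
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    accuracy = correct / len(labels)
    # Sensitivity to true defects is often the critical GxP metric:
    # a missed defect (false "pass") is worse than a false rejection.
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    results = {"accuracy": accuracy, "sensitivity": sensitivity}
    passed = all(results[m] >= threshold for m, threshold in criteria.items())
    return results, passed

# Hypothetical OQ run against assumed acceptance criteria
results, passed = oq_performance_check(
    predictions=["fail", "fail", "pass", "pass", "fail"],
    labels=["fail", "fail", "pass", "fail", "fail"],
    criteria={"accuracy": 0.75, "sensitivity": 0.70},
)
```

The point of the sketch is the ordering: the `criteria` dictionary exists before the test run, and the pass/fail verdict is computed mechanically against it, which is what an inspector will expect to see documented.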

For AI features embedded in commercial platforms like BC Copilot, the vendor's model card and responsible AI documentation substitute for some of this evidence — equivalent to how Microsoft's vendor qualification documentation substitutes for the FS in a standard BC validation. The organisation must still assess whether the vendor-provided evidence is sufficient for the intended use.

Designing human oversight workflows

Annex 22 explicitly requires human oversight of AI outputs in GxP-relevant processes. Designing this oversight correctly is not simply a matter of adding an approval step. The oversight must be meaningful: the reviewing person must have access to the information needed to evaluate the AI output, must understand the limitations of the AI system, and must have the authority to override the AI output without impediment.

In a Business Central environment, human oversight for an AI-influenced GxP decision typically means: the AI output is presented to the reviewer alongside the source data; the reviewer can accept, modify, or reject the AI output; the review action is recorded in the Change Log or approval workflow with user identity and timestamp; and the system does not proceed to the GxP record without a confirmed human decision.

Designing this workflow requires involvement from both the QA function (to define the oversight requirements) and the system configurator (to implement the workflow in BC or the integrated system).
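The workflow described above can be sketched as a review gate: the AI output never becomes a GxP record without a confirmed human decision, and every review action is logged with identity and timestamp. This is a minimal illustrative model, not Business Central code; the class and field names are assumptions.

```python
from datetime import datetime, timezone

class HumanReviewGate:
    """Gate between an AI output and the GxP record it would influence."""

    def __init__(self):
        self.audit_log = []  # stands in for the Change Log / approval workflow

    def review(self, ai_output, source_data, reviewer, decision, final_value=None):
        """Reviewer sees the AI output alongside source data and must
        accept, modify, or reject it before any record is created."""
        if decision not in ("accept", "modify", "reject"):
            raise ValueError("decision must be accept, modify, or reject")
        self.audit_log.append({
            "reviewer": reviewer,                              # user identity
            "timestamp": datetime.now(timezone.utc).isoformat(),  # when reviewed
            "ai_output": ai_output,
            "source_data": source_data,
            "decision": decision,
        })
        if decision == "accept":
            return ai_output      # AI output becomes the record value
        if decision == "modify":
            return final_value    # reviewer's overriding value becomes the record
        return None               # rejected: no GxP record is created
```

The design choice worth noting: the gate returns the record value rather than writing it, so the system structurally cannot proceed to the GxP record without passing through a logged human decision.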

Monitoring AI model performance in production

Annex 22 requires ongoing monitoring of AI model performance in production environments. This means defining performance metrics before deployment, collecting performance data continuously or periodically after deployment, investigating anomalies and performance degradation, and triggering revalidation or model replacement when performance falls below defined thresholds.

Model drift — gradual degradation of model performance as the production environment changes — must be planned for. Manufacturing process changes, raw material variability, and equipment changes can all shift the distribution of data the model encounters, reducing accuracy over time. A monitoring procedure that detects and responds to drift is part of the validated state of an AI system subject to Annex 22.
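A drift monitoring procedure of the kind described can be as simple as a rolling-window accuracy check against the threshold defined in the validation documentation. The window size and threshold below are assumed values for illustration; the real values belong in the validation plan.

```python
from collections import deque

class DriftMonitor:
    """Rolling-window performance monitor: flags drift when accuracy over
    the last `window` confirmed outcomes falls below the defined threshold,
    triggering investigation and potentially revalidation."""

    def __init__(self, window=100, threshold=0.95):
        self.results = deque(maxlen=window)  # True/False per reviewed output
        self.threshold = threshold

    def record(self, model_output, confirmed_truth):
        """Compare each production output against its human-confirmed outcome."""
        self.results.append(model_output == confirmed_truth)

    def drift_detected(self):
        if len(self.results) < self.results.maxlen:
            return False  # not enough production data to judge yet
        return sum(self.results) / len(self.results) < self.threshold
```

In practice the "confirmed truth" comes from the human oversight step, which means the oversight workflow and the drift monitor can share the same audit data rather than requiring a separate labelling effort.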

AI platforms connected to Business Central

Several AI platform categories connect to BC in regulated manufacturing environments: predictive maintenance systems that ingest sensor data and feed maintenance recommendations to BC work orders; process optimisation platforms that analyse production data and propose parameter adjustments; demand forecasting tools that use BC sales history to predict future demand. The Annex 22 assessment must consider not just the AI platform itself, but the interface between the platform and BC — specifically, whether the interface creates GxP-relevant records in BC automatically or requires human decision before records are created.
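The interface question at the end of that assessment can be reduced to a small decision rule. The rule below is an illustrative reading of the scoping logic in this article, not wording from Annex 22, and the control descriptions are assumptions.

```python
def interface_control(creates_gxp_record_automatically: bool,
                      human_gate_present: bool) -> str:
    """Classify an AI-platform-to-BC interface for the scoping assessment."""
    if not creates_gxp_record_automatically:
        # e.g. a demand forecast a planner acts on manually
        return "below threshold: interface creates no GxP record"
    if human_gate_present:
        # e.g. maintenance recommendations gated by a work-order approval
        return "in scope: validate the gate and the model performance"
    # e.g. inspection results written straight into a quality record
    return "in scope, high risk: add a human decision step before record creation"
```

Running each integration through a rule like this makes the interface portion of the scoping assessment explicit and repeatable across the predictive maintenance, process optimisation, and forecasting categories listed above.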

Download the AI in regulated manufacturing and Annex 22 compliance guidance as a PDF using the link below.