Manual visual inspection is the most common method for performing 100% visual inspection of parenteral liquids and remains a critical procedure that all manufacturers must continue to perform.
Automated visual inspection, with its higher throughput and lower running costs, provides an effective alternative. Automated visual inspection has improved significantly over the last decade with digitalization and data processing, offering manufacturers a consistent process not subject to fatigue or mood.
While manual visual inspection may appear old-fashioned compared to its automated alternative, two constraints demand that manufacturers develop and maintain a high level of expertise in manual visual inspection, even if the volume of manually inspected batches is marginal.
First, according to USP <1790> Visual Inspection of Injections, any alternative to manual visual inspection must be demonstrated to have “equivalent or better performance when compared to manual visual inspection” (1). The Knapp and Kushner framework provides a recognized methodology to ensure this equivalency; this approach compares the probabilities of detection of defects found by manual visual inspection to the performance of detection by the alternative (2). An organization with an effective manual visual inspection process has a more robust baseline which, in turn, drives automated visual inspection qualification to the necessary level of performance. Ultimately, this means that automated visual inspection qualification is based not on absolute criteria, but on relative ones.
Second, after 100% visual inspection, a sampling of accepted units of each batch is inspected. This ensures that the remaining level of defects is statistically acceptable. Current compendia and regulations require this inspection to be done manually.
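For a sense of the statistics behind such sampling, the probability that a sample of n accepted units passes with at most c observed defects can be modeled with a binomial sum. This is a minimal sketch only; the n and c values shown are illustrative stand-ins, as real plans are read from ANSI/ASQ Z1.4 or ISO 2859-1 tables for the chosen AQL and lot size.

```python
# Operating characteristic of a single sampling plan (binomial model).
# The n and c values used below are illustrative, not taken from a standard.
from math import comb

def p_accept(n, c, defect_rate):
    """Probability that a random sample of n units contains at most c
    defective units, given the batch's true residual defect rate."""
    return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
               for k in range(c + 1))

# A cleaner batch is far more likely to pass the same plan:
# p_accept(200, 0, 0.0001) is roughly 0.98, p_accept(200, 0, 0.01) roughly 0.13
```

The steep drop in acceptance probability as the residual defect rate rises is what makes the AQL sample a meaningful statistical check on the preceding 100% inspection.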
Manual visual inspection brackets visual inspection from the beginning (qualification) to the end (acceptance quality limit, or AQL) of the process, regardless of the complexity and sophistication of automated visual inspection.
As a critical procedure, how can adequate manual visual inspection performance be assured?
Because manual visual inspection is a human-based process, the qualification of inspectors is a pillar of its success. Typically, inspectors are qualified using a set of defective units mixed among conforming units; the inspector must properly detect the defects to receive an “inspection license.”
But what does “proper detection” mean? And what criteria apply?
Is 100% Detection Truly Beneficial?
USP <1790> recommendations are based on the Knapp & Kushner methodology. The first step is calculating the criteria for success for an inspector’s qualification. To do this, a group of reference inspectors perform repeated manual inspections. As outlined in Section 7.4, their results are then statistically combined to determine the probability of detection baseline, which is the success criteria for future inspectors. Then, distinct criteria should be defined for each defect class (i.e., criticality).
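As a rough illustration of how such a baseline might be computed, the following Python sketch derives per-defect probabilities of detection (POD) from repeated inspections, in the spirit of the Knapp & Kushner methodology. The 0.7 reject-zone threshold is the conventional value from that methodology, but the data and the pass criterion here are invented for illustration and are no substitute for a site’s qualification protocol.

```python
# Illustrative sketch (not the compendial procedure): deriving a
# probability-of-detection (POD) baseline from repeated inspections by
# reference inspectors. Data and pass criterion are hypothetical.

def pod(detections, trials):
    """POD of one defect unit = times detected / total inspection trials."""
    return detections / trials

# (times detected, total trials) per defect unit, pooled over the
# reference group's repeated inspections -- hypothetical numbers
reference_results = {
    "crit-01": (30, 30),   # obvious critical defect, always caught
    "crit-02": (22, 30),   # less conspicuous critical defect
    "major-01": (25, 30),
}

baseline = {u: pod(d, n) for u, (d, n) in reference_results.items()}

# Reject zone: units the reference group detects with POD >= 0.7
reject_zone = {u for u, p in baseline.items() if p >= 0.7}

def qualifies(candidate_results):
    """A candidate passes if their summed POD over the reject zone is at
    least the reference group's -- a relative, not absolute, criterion."""
    cand = sum(pod(*candidate_results[u]) for u in reject_zone)
    ref = sum(baseline[u] for u in reject_zone)
    return cand >= ref
```

Note that a candidate who matches the reference group on the hard-to-see units passes without perfect scores, which is the point: the criterion is relative to the baseline, not an absolute 100%.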
Following this method provides consistency to the qualification of visual inspection processes: a new inspector who passes the qualification can be considered capable relative to the probability of detection baseline. A correlation, however, is often assumed between the criticality of a defect and its ease of detection. Intuitively, the most critical defects might be expected to be the most readily detected, but this intuition is misguided. Defect criticality is based on patient risk, not on the visual attributes that determine ease of detection; likewise, a defect’s visual attributes have no bearing on the risk it poses to the patient. A critical defect could be difficult to detect after 10 seconds (or more) of inspection on a black-and-white background, while a minor defect with only cosmetic impact could be detected within 1 second of inspection.
In the parenterals industry, it is common to find that inspectors are qualified against arbitrary performance criteria tied to criticality. The expectation that product be inspected with 100% detection of critical defects during qualification leads to progressively lower expectations as criticality decreases (e.g., 80% for major defects). Because critical defects are those that potentially lead to patient safety issues, the figure of “100% critical defects” may reassure management as well as regulators about product quality. While this is certainly an ideal figure, the approach may introduce bias and side effects, disregarding the intrinsic nature of visual inspection as a probabilistic process. As a result, three situations could potentially put patients at risk, as outlined in Table 1.
Table 1: Three Situations Where 100% Detection Leads to Greater Risks

| With… | …there is a risk of… | …resulting in… |
| --- | --- | --- |
| Critical defects remaining probabilistically detected | Defects with potential impact on patient safety not being classified as critical, as the detection method cannot ensure 100% detection | Bias on defect classification |
| Obvious critical defects being qualified that are not representative of all critical defects | Qualification status being relevant for only some critical defects | Unknown performance of detection for the variety of critical defects |
| Overconfidence and misbelief about the purpose of the visual inspection process, with 100% detection routinely expected | 100% manual visual inspection being considered to act as a “filter” or “safety net” | Lack of quality embedded into the product, regarding critical defects, prior to visual inspection |
The first risk is focusing on obvious critical defects while neglecting those that could legitimately impact patients: the risk of bias on defect classification. For example, a defect with a low probability of detection may not be classified as critical even though it does, in fact, impact patient safety.
The second risk is qualifying inspectors on obvious critical defects that are not representative of all critical defects seen during routine visual inspection. Such a situation may lead to a qualification status that is relevant only for some critical defects (i.e., the most obvious, with 100% probability of detection), but not for all of them (i.e., the obvious defects plus those with truly probabilistic detection, at approximately 70–80% probability of detection). According to Section 7.1 of USP <1790>, defects in test sets should be representative of the entire range naturally present in production. Because the variety of critical defects seen during routine production also encompasses defects with lower detectability, inspectors should be qualified using these defects to ensure they can adequately detect them.
The third risk is overconfidence in 100% manual visual inspection as a process capable of removing all critical defects. An AQL failure or a customer complaint involving a critical defect may then be read as a failure of 100% manual visual inspection to remove these defects efficiently. Such a mindset can also distort the handling of manufacturing incidents: regardless of whether the process design prevents the generation of critical defects, 100% manual visual inspection is assumed to act as a safety net that removes them, no matter the defect load of the batch prior to inspection. This approach suggests that quality is not “embedded into the product” but must instead be built in by a filter, such as 100% manual visual inspection.
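The arithmetic behind this point is simple: with a probabilistic detection rate, the residual defect level after 100% inspection scales directly with the incoming defect load, so inspection cannot compensate for upstream quality. A minimal sketch with invented numbers:

```python
# Why 100% inspection is not a "safety net": the escape rate is
# proportional to the incoming defect load. Numbers are illustrative.
def escape_rate(incoming_defect_rate, pod):
    """Expected residual defect rate after one 100% inspection pass
    with the given probability of detection (POD)."""
    return incoming_defect_rate * (1.0 - pod)

# Even at a strong 90% POD, a batch with 10x the defect load
# leaves 10x the residual defects behind:
clean_batch = escape_rate(0.001, 0.90)   # ~0.0001 (100 ppm)
dirty_batch = escape_rate(0.010, 0.90)   # ~0.0010 (1,000 ppm)
```

In other words, no detection rate short of a true 100% can decouple finished-product quality from the defect level entering inspection.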
Paradoxically, the three situations identified in Table 1 show that imposing a 100% criterion on critical defects may introduce systemic risks to the process performance of manual visual inspection. Such an approach does not ensure that an efficient manual visual inspection process is in place.
Inspector qualification is not intended to assess perfect detection of any category of defects, even critical ones. Qualification determines whether the inspector has the proper gestures, concentration, and pacing, the combination of which allows the inspector to consistently detect defects, whatever their classification and the risk they represent to the end user. Qualification of an inspector is only one part of the qualification program; proper training, with adequate illustrated procedures, dry runs, coaching, and mentoring, is also key to properly qualifying inspectors.
Possible bias on defect classification, unknown performance of detection across the whole variety of critical defects, and a possible lack of quality embedded into the product prior to visual inspection indicate that a 100% detection criterion for critical defects is not as ideal as common sense suggests.
Even though the detection rate should be as high as possible, ensuring patient safety does not mean 100% detection of critical defects, whether during routine processing or operator qualification. It implies something quite different: zero critical defects in the finished product. Visual inspection, with its intrinsic limitations even for critical defects, is only one piece of a holistic process that requires a robust control strategy for preventing and managing defects, from incoming material through the fill-finish steps.
- U.S. Pharmacopeial Convention. Chapter <1790> Visual Inspection of Injections. In USP 42–NF 37; USP: Rockville, MD, 2019.
- Knapp, J., and Kushner, H. Generalized Methodology for Evaluation of Parenteral Inspection Procedures. J Parenter Drug Assoc 34 (1) (1980), pp. 14–61.