NOTE: This discussion is a continuation of Monday's post by Jeff Boatman, CQA and Sr. Subject Matter Expert at QPharma. To read the first part of this post, please click here.
INTEGRAL SOFTWARE
Suppose I have three autoclaves. They’re used to sterilize final product for human implantation: major potential for serious health consequences (and remember, you don’t get to say “it doesn’t matter because we wind up testing our product”—it matters). All three of these autoclaves use software.
The first one has a separate PC connected to it, and the software resides on that computer’s hard drive, interfacing with the autoclave through a serial port. The PC is a general-purpose computer that can interface with a number of different autoclave models, and allows the user to set up specialized cycle profiles and alarms. Does that software need to be validated? You bet!
The second one sounds different, but in most ways it’s not. That autoclave has a computer built right into it, running software with a user interface that permits customized cycles and alarms. There is one real difference: it is not designed to be disconnected from one autoclave and hooked up to another, so some scale-down is appropriate; there is no need to verify that the computer can be reliably moved between machines. But the core expectations remain the same: not only must the software, as it is configured right now, run the autoclave so that it consistently works (OQ = correct temperature/pressure/cycle profiles, PQ = actually sterilizing loads); it must also demonstrably provide confidence that a range of different parameters and program sequences will be reliably translated into actual machine operation, and that those profiles are consistently implemented. That qualification goes beyond how the machine itself operates: it is software validation. We may have used a risk assessment to scale down the amount of testing needed, but there is definitely a separate software aspect here.
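To make that expectation concrete, here is a minimal sketch, in C, of the kind of check an OQ might automate: program several cycle profiles spanning the allowed range and confirm that the machine's logged values track each setpoint within tolerance. Everything here is hypothetical, including the read_logged_cycle() stand-in and the numeric values; a real qualification would parse the autoclave's actual cycle records.

```c
/* Hypothetical OQ-style sketch: exercise several programmed cycle
 * setpoints across the allowed range and confirm the logged values
 * track each one within tolerance. All names and numbers are
 * illustrative, not taken from any real autoclave interface. */
#include <stdio.h>
#include <math.h>

typedef struct {
    double set_temp_c;       /* programmed sterilization temperature */
    double set_pressure_kpa; /* programmed chamber pressure */
    int    set_hold_min;     /* programmed hold time */
} CycleProfile;

/* Stand-in for reading the machine's recorded cycle data; a real
 * qualification would parse the cycle printout or data log. */
static CycleProfile read_logged_cycle(const CycleProfile *programmed) {
    return *programmed;
}

int main(void) {
    /* Exercise the low, nominal, and high ends of the allowed range. */
    const CycleProfile tests[] = {
        {115.0, 170.0, 30}, {121.0, 205.0, 15}, {134.0, 304.0, 3},
    };
    const double temp_tol = 0.5, press_tol = 5.0;
    int failures = 0;

    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        CycleProfile logged = read_logged_cycle(&tests[i]);
        int ok = fabs(logged.set_temp_c - tests[i].set_temp_c) <= temp_tol
              && fabs(logged.set_pressure_kpa - tests[i].set_pressure_kpa) <= press_tol
              && logged.set_hold_min == tests[i].set_hold_min;
        printf("profile %zu: %s\n", i, ok ? "PASS" : "FAIL");
        if (!ok) failures++;
    }
    return failures;
}
```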
Now consider the third example. This autoclave is computerized, but the software is programmed for one specific cycle profile; it is not customizable. There are a couple of ways this could be done: the autoclave could truly be a one-trick pony, with the computer no more customizable than a four-function calculator; or, more and more commonly, the software is customizable, but the manufacturer has decided that the configuration will be “locked down” and never changed, at least not outside formal change control (which includes, of course, a process for evaluating changes for the need to perform re-validation).
I contend that in this last example, the whole concept of “software validation” goes out the window. If the system has any potential for changing settings and parameters, then you certainly must both explain the measures used to preclude anyone from making unauthorized and unvalidated changes and demonstrate the effectiveness of those controls; and if the autoclave electronically captures or transmits any digital records used for GxP purposes, then validation for Part 11 may still apply. Those records could include actual cycle monitoring records, any data input by users such as batch identification, or records confirming the identity of authorized users.
But provided you have covered these Part 11 items, there is no regulatory obligation to perform a specific “software” validation. You should be able to trace whatever software is in the unit, including any customizations made prior to its “lockdown,” back to a backup or code listing somewhere (both to prove that you could return the machine to its validated state and to satisfy the “written copy of the program…” requirement in 21 CFR 211.68(b)). But if the software cannot be changed, and you show that the autoclave meets its requirements and works as expected, then what exactly is the advantage of a separate validation plan, OQ, and report just for its “software” aspects? You are going to have to validate that autoclave for its proper function anyway [21 CFR 211.113(b), 820.75(a), and a host of FDA and industry standards]; why not simply treat the software functions as part of the hardware, so that validating one validates both?
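One concrete way to keep that traceability is to record a checksum of the validated program image and recompute it whenever the question arises. The following sketch is purely illustrative and uses a toy rolling checksum to stay self-contained; a real system would hash the actual firmware image with a proper cryptographic algorithm.

```c
/* Hypothetical sketch of tying the installed program back to its
 * validated baseline: compute a checksum of the stored program image
 * and compare it to the value recorded at validation. The "image"
 * and the simple rolling sum are stand-ins, not a real mechanism. */
#include <stdio.h>
#include <stdint.h>

static uint32_t checksum(const unsigned char *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31u + data[i];   /* simple rolling sum, not crypto */
    return sum;
}

int main(void) {
    /* Stand-in for the program image read out of the unit. */
    const unsigned char image[] = "LADDER-LOGIC-REV-C";

    /* Value recorded at validation, held with the validation records. */
    uint32_t baseline = checksum(image, sizeof image);

    /* Later re-verification: recompute and compare. */
    uint32_t now = checksum(image, sizeof image);
    printf("baseline=%08x current=%08x -> %s\n",
           baseline, now, now == baseline ? "MATCH" : "CHANGED");
    return 0;
}
```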
Admittedly, an autoclave is perhaps not the best example; autoclaves are becoming increasingly sophisticated precisely because manufacturers want the flexibility to customize cycles to circumstances, and the old “15 minutes at 121 °C” is becoming antiquated. For the same reason, the argument probably does not apply to most CNC-driven production equipment either, though I have definitely seen CNC mills that have become “one-trick ponies” to which it could apply.
Where I do often see this is in machines manufactured by a tool maker to be adapted to a variety of industries, which the drug or device firm then buys with a particular set of ladder logic in its Programmable Logic Controller (PLC). Consider a heat sealer used to seal foil pouches: two types of pouches are used, and each requires a particular temperature and a specific belt speed. The manufacturer allows a multitude of settings, but you have programmed it to do only those two, and you will never change them (perhaps the temperature is maintained by a PID controller, so human intervention is unnecessary; in fact, that is one of the requirements being validated). Assuming that you can show that these settings really cannot be changed, either because you have objectively validated the effectiveness of the controls around them or because there simply isn’t any interface that would allow them to be changed, then what is the point of performing a software validation?
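If it helps to picture what “no interface that would allow them to be changed” can look like in software, here is a minimal hypothetical sketch in C: the two validated recipes live in a const table with a read-only lookup function and, deliberately, no setter. All names and numbers are illustrative, not taken from any actual heat-sealer firmware.

```c
/* Illustrative sketch of "locked down" settings: the two validated
 * pouch recipes live in a const table; the only exposed operation is
 * selection, and there is deliberately no set_recipe() counterpart. */
#include <stdio.h>

typedef struct {
    const char *pouch_type;
    double seal_temp_c;    /* held by the PID loop, not the operator */
    double belt_speed_mps;
} SealRecipe;

/* const and file-scope: nothing in the program writes to these. */
static const SealRecipe RECIPES[] = {
    {"POUCH_A", 185.0, 0.12},
    {"POUCH_B", 205.0, 0.08},
};

const SealRecipe *select_recipe(int idx) {
    if (idx < 0 || idx >= (int)(sizeof RECIPES / sizeof RECIPES[0]))
        return NULL;   /* reject anything outside the two recipes */
    return &RECIPES[idx];
}

int main(void) {
    const SealRecipe *r = select_recipe(1);
    if (r)
        printf("%s: %.1f C at %.2f m/s\n",
               r->pouch_type, r->seal_temp_c, r->belt_speed_mps);
    return 0;
}
```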
Key to this argument is the assumption that the software genuinely will perform consistently. If I type “clear 5 x 5 =” on my calculator, I should always get the same answer. The calculator may have a serious software bug, but if that one calculation works once, it should work again. But if I now type “clear 5 x 6 =,” that software bug may suddenly impact my results. So suppose my heat sealer’s software allows me to input batch number, user name, and product ID into fields to be stored. Could different inputs result in changing the ladder logic? Probably not, but I have seen a case where typing into a field overran the software’s table entry and overwrote the next instruction, causing bizarre, non-repeatable behavior. If your equipment allows variable inputs and you cannot document why you are confident that they could not affect program execution, then software validation is probably in the cards for you. (This could in principle apply to data outputs as well.) As the complexity of the software increases, and in particular as the amount of operator input grows, there will come a tipping point at which software validation is needed.
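To make that failure mode concrete, here is a deliberately contrived C sketch (the struct, field sizes, and inputs are all hypothetical, not drawn from any real equipment): an unchecked copy into a fixed-size input field runs past the end of the field and can clobber adjacent program state, much as the oversized field entry overwrote the next instruction in my anecdote.

```c
/* Deliberately buggy illustration (undefined behavior by design):
 * an operator-entered field sits next to the machine's control state,
 * and an unchecked copy can spill from one into the other. */
#include <stdio.h>
#include <string.h>

struct panel_state {
    char batch_id[8];  /* operator-entered field: 7 characters plus NUL */
    int  step_index;   /* the "next instruction" the machine will run */
};

int main(void) {
    struct panel_state s = { "", 3 };

    /* A short input fits; step_index is untouched. */
    strcpy(s.batch_id, "B-104");
    printf("step after short input: %d\n", s.step_index);

    /* An oversized input: strcpy() performs no length check, so the
     * extra bytes run past batch_id and can overwrite step_index,
     * producing exactly the bizarre, non-repeatable behavior described
     * above. A bounded copy such as
     * snprintf(s.batch_id, sizeof s.batch_id, "%s", input) prevents it. */
    strcpy(s.batch_id, "BATCH-2023-OVERRUN!");
    printf("step after long input:  %d\n", s.step_index);

    return 0;
}
```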
Yes, of course the temperature of a heat sealer needs to be consistently met whether you are processing one pouch or a hundred. And sure, if the software allows the user to raise or lower the temperature by five degrees to compensate for on-the-line peel test results, you need to exercise the program at its limits. But all of that is true regardless of whether the machine is computerized, analog, or “other”: I once worked on a manufacturing machine whose “PLC” was implemented as a series of air valves acting as logic elements. It could, in fact, be “re-programmed” by changing which valves were connected to which tubes. Was that “software”? Sure, by the strict definition. Was it an “electronic record”? Absolutely not, and it would have been crazy to write a separate “software validation” protocol. We took a photograph of its internal workings, attached it to a drawing showing the correct configuration, attached that to the IQ, buttoned the unit up, stuck a security seal over it, and gave the extra seals to the maintenance department. We then validated the machine based on what it was intended to do, not on what it could theoretically do.
SUMMARY
In General Principles of Software Validation, FDA proclaims that “software is not hardware.” In case that isn’t self-evident, the guidance goes on to explain why it is true: software evolves and improves over time, software can act in nonobvious ways, and software’s sequence of execution can change. I say, “yes, but sometimes software can be treated like hardware during validation,” with the proviso: “if you’re careful.”
Validating computerized manufacturing equipment certainly requires consideration of the software aspects of its operation. If that equipment utilizes or generates GxP electronic records, then the 21 CFR 11 implications must be evaluated and validated. If that software has general-use capabilities or interfaces with other systems such that one can control the other, either directly or through the relay of information, then software validation is an important part of the overall validation. Certainly the default position from a regulatory standpoint is to perform software validation.
But if equipment follows software instructions that are fixed, or that at least permit only a very limited ability to change parameters and no ability to change logical sequencing (including as an unexpected result of inputting or outputting data), and if it can be demonstrated that the ability to make changes to that system is tightly controlled (or nonexistent), then incorporating the validation of software functionality into the conventional qualification of the equipment itself may be a viable, cost-saving, and compliant option worth considering. Since risk assessments are already an FDA expectation for validation anyway, spending a few extra minutes evaluating whether the software in a piece of equipment actually needs its own validation effort, and documenting that finding and its reasoning, could be time well spent indeed.
Please leave your comments on the blog. Do you agree? Or disagree?