
Monday, September 27, 2010

Don't Validate that Software!


BACKGROUND

General software validation became an official part of U.S. regulated Life Sciences in 1996 with the finalization of the Quality System Regulation. Contained within that Rule is 21 CFR 820.70(i), which states that when computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol. It is important to note that while Part 820 is a Medical Device regulation, another section [21 CFR 820.1(b)] gives FDA the authority to apply this requirement to other companies as well.

Prior to this, there was no regulation dealing specifically with software validation; the closest was 21 CFR 211.68, which since 1964 has stated that if a manufacturer uses an automated system to produce a drug product, that system shall be routinely calibrated, inspected, or checked according to a written program designed to assure proper performance. Bear in mind that computers were practically a novelty when that Rule was finalized, and modern validation practices were only beginning to emerge; the first software inspectional guidance wasn’t issued until 1983.

In 1997, FDA followed up with 21 CFR Part 11, which states that systems that create, store, process, or transmit electronic records used to satisfy regulations must include validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records [21 CFR 11.10(a)]. Shortly after, the Center for Devices and Radiological Health published Guidance for Industry: General Principles of Software Validation, which has since gone through several draft and final revisions. While that guidance is from FDA’s Medical Device division, and does include sections on the validation of software that is, or is used within, a Medical Device, it has broad applicability to other Life Science firms as well.

The broad applicability of 21 CFR 820.70(i), the virtually universal applicability of Part 11, and the adoption of the standards in FDA’s software validation guidance make it clear that FDA has very high expectations for software validation at Life Science firms, and indeed consulting firms like QPharma rely upon those comprehensive expectations to stay in business. The problem we see most often is software validation that is too sparse, too shallow, or too poorly executed. Further complicating this is the latest move by CDRH to vastly expand the scope of what FDA considers to be "Medical Device Software" into areas that have traditionally been considered outside CDRH’s (and even FDA’s) jurisdiction.

THE USUAL...AND THE UNUSUAL

When we do audits, our comments are usually: you need to validate this...you forgot to validate that...you didn’t consider this. But once in a while, our comment is: why did you validate that?

Let’s hit some of the basics. First, you need a Traceability Matrix (TMX), which lists all of your requirements and all of your planned (and performed) testing, and links the two together. Traceability of requirements to testing is always a good idea, but for software validation it’s mandatory: the FDA guidance mentions it repeatedly. The most important purpose of a TMX is to ensure that every requirement actually gets validated or otherwise verified. But it also goes the other way: if you are testing something and it is not traceable back to a requirement, then why are you testing it?
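As a rough illustration of both directions of that check, here is a minimal sketch in Python; the requirement and test IDs, and the idea of holding the matrix as a plain mapping, are hypothetical:

    # Minimal two-way traceability check. Requirements and tests are ID
    # sets; the "matrix" maps each test to the requirements it claims to
    # verify. All IDs are hypothetical.
    requirements = {"REQ-001", "REQ-002", "REQ-003"}
    tests = {"TC-101", "TC-102", "TC-103", "TC-104"}

    trace = {
        "TC-101": {"REQ-001"},
        "TC-102": {"REQ-002"},
        "TC-103": {"REQ-002"},
        # TC-104 appears nowhere, and nothing references REQ-003
    }

    covered = set().union(*trace.values())
    print("Requirements never tested:", sorted(requirements - covered))
    print("Tests tracing to no requirement:", sorted(tests - trace.keys()))

The first report is the familiar gap analysis; the second is "why did you validate that?" in executable form.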

Another piece of low-hanging fruit is risk assessment. If you take the validation requirements of 21 CFR 11.10(a) and 820.70(i) literally, you’ll probably never manage to make any product for sale: all you will ever do is validate, validate, validate. (As an example, if you literally accept 820.70(i)’s obligation to validate all software changes prior to implementation, can you imagine the implications for daily antivirus definitions and Microsoft Windows patches?) In both its software validation guidance and the 2003 Guidance for Industry: Part 11, Electronic Records; Electronic Signatures – Scope and Application, FDA has been putting increasing emphasis on risk assessments of systems to determine exactly what truly needs to be validated. In particular, the Part 11 guidance offers companies the possibility of exempting broad swaths of functionality from validation, provided that you can back that up with documented risk assessments. FDA may object to reduced validation if it disagrees with your risk assessment, but it will certainly object to reduced validation that is not backed by any written risk assessment at all.
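To make the risk-based approach concrete, here is a hypothetical scoring sketch; the 1-to-3 scales, the multiplication, and the thresholds are illustrative assumptions, not anything prescribed by FDA or the guidance:

    # Hypothetical risk scoring to scope validation effort. A real
    # program would define and document its own scales and thresholds.
    def validation_depth(severity: int, likelihood: int) -> str:
        """severity and likelihood are each rated 1 (low) to 3 (high)."""
        score = severity * likelihood
        if score >= 6:
            return "full validation, including negative/challenge testing"
        if score >= 3:
            return "reduced validation of the positive path, with rationale"
        return "no formal validation; file the risk assessment that exempts it"

    # Daily antivirus updates: frequent, but low product-quality impact
    # (an assumption for illustration)
    print(validation_depth(severity=1, likelihood=3))
    # A batch-yield calculation module: high severity
    print(validation_depth(severity=3, likelihood=2))

The particular formula matters far less than the fact that the rationale is written down and repeatable; that document is what gives an investigator something to agree or disagree with.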

In both of these examples—validation of functions that aren’t required, and validation of functions that a risk assessment could have exempted—we would caution our client that they are going overboard and that their validation dollars are better spent elsewhere. But what about software that runs a manufacturing process, where that process (and therefore that software) really does have a significant impact upon product quality, and even public health?

More on this topic and the answer this Wednesday, September 29th!

9 comments:

  1. I agree that it is rare to find a situation where a firm validated something that didn't need to be validated. I also agree that the TMX can help in determining whether a particular requirement or function must be validated. Both points are especially true in today's risk-based approach to validation.

    With all this said, I have seen systems where the cost of figuring out what is necessary to validate is greater than the cost of simply validating the entire system. If the culture of the IT organization is software-engineering driven and life-cycle oriented, then the TMX and documenting/testing are simply part of getting the job done.

    This can also be seen from the view that validation is the documented evidence of good software development practices.

  2. I would like to assert that a trace matrix deliverable is not required. However, traceability of requirements to specifications / design, and of those to objective verification test evidence, is required. There are many ways to assure traceability; a matrix is one of them. Other solutions may include automation such as a relational database. Often these tools are also useful in managing requirements volatility.

    Requirements often evolve and become more refined during the application build, integration, and even as V&V activity proceeds. In those cases, the testing used to prove the requirement may also require rework. A common failure mode associated with requirements volatility is poorly worded requirements that are ambiguous or cannot be quantified. Appropriately worded requirements should be framed for use as acceptance criteria and referenced by the test script. I believe that the root cause of requirements volatility is often a failure to establish the required attributes of the requirements up front. While the ability to trace or relate testing to the requirements and their associated specifications or designs can be achieved in several different ways, without it one can never know when the testing is done. A sketch of the database approach follows.
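    A minimal sketch of that relational approach, using Python's built-in sqlite3 module; the schema, IDs, and acceptance criteria are hypothetical:

        import sqlite3

        # Minimal database-driven traceability; schema and data are hypothetical.
        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE requirement (
                id TEXT PRIMARY KEY,
                acceptance_criteria TEXT NOT NULL
            );
            CREATE TABLE test_script (
                id TEXT PRIMARY KEY,
                requirement_id TEXT REFERENCES requirement(id)
            );
        """)
        db.executemany("INSERT INTO requirement VALUES (?, ?)", [
            ("REQ-001", "Result is rounded to two decimal places"),
            ("REQ-002", "Audit trail records user identity and timestamp"),
        ])
        db.execute("INSERT INTO test_script VALUES ('TC-101', 'REQ-001')")

        # Requirements with no linked test script: the coverage-gap report
        gaps = db.execute("""
            SELECT r.id FROM requirement r
            LEFT JOIN test_script t ON t.requirement_id = r.id
            WHERE t.id IS NULL
        """).fetchall()
        print("Untested requirements:", [row[0] for row in gaps])

    The same query run the other way (test scripts linked to no requirement) flags testing that traces to nothing, and the records survive requirement rework far better than a hand-maintained spreadsheet.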

  3. (font sarcastic) We all learned this our freshman year: you can make up for the fact that you spilled your test sample on the workbench by making the lab report thick enough. Douglas Adams says the superficial problems totally obscure the fundamental ones. Finally, why does the drunk look for his keys by the light post instead of in the dark alley where he may have lost them? Because the light is better. (/font sarcastic)

    In a perfect world, I would agree. When you charge $10K for a validation binder, it's hard to justify just five pages, even if those pages hit every Critical to Quality requirement and lock up beyond question that each requirement has been met.

    In a less perfect world, lots of data on what is easy to measure hides the fact that the proof you need is hard to measure.

  4. Yes, you need traceability, but you do not NEED a traceability matrix. There are other ways to manage traceability (see the ongoing discussion).

  5. David, it is true that GPOSV never explicitly uses the term "matrix," and indeed there can be many ways that this expectation can be met without a standalone TMX document. But I do want to make sure that my point is not lost, which is: in addition to using your traceability scheme, whatever it may be, to identify requirements that were missed in testing or other verification (an exercise that is certainly important, but which likely creates MORE work for you), one should not miss out on using this methodology to also identify testing that is unnecessary (which likely will result in LESS work). FDA insists that you do this anyway - why not also use it to reduce the amount of work you're doing where possible?

  6. I find that the TMX prevents me from overlooking something.

  7. I'm not saying that a requirements traceability matrix isn't useful, just that there are other ways of achieving requirements traceability.

  8. Steve is absolutely right that an RTM is only one way of achieving traceability (to be covered in a forthcoming webcast - see http://www.businessdecision-lifesciences.com/TPL_CODE/TPL_AGENDA/PAR_TPL_IDENTIFIANT/273/1584-agenda.htm if you're interested).

    The referenced blog could have usefully mentioned the value of what GAMP defines as the initial risk assessment, to determine whether or not the software/system needs validating, the scope of any validation and the criticality of the software/system, which helps in the validation planning.

    In my experience it is relatively rare to validate software that does not require validation (the reverse is much more common), but with risk-based validation there should be little additional overhead even when this does happen.

  9. The term "matrix" is probably historical. In healthcare-industry software development it probably came into use at a time when other traceability mechanisms already existed but were regarded as project overkill, or simply not good value for the money companies had to spend, so the cheapest implementation of a mechanism for tracing relationships among elements was a "matrix" defined in a spreadsheet format. The need to maintain that traceability over time then promoted the use of database-driven artifacts, which naturally allow these processes to run more smoothly over the lifecycle of a system.

    Traceability serves two general purposes. First, as a project tool, it quickly establishes the completeness of a given development and determines coverage all the way from requirements through testing. This is in line with good engineering practices, and may even be a factor in deciding whether an outsourced developer should get paid for their work!
    Also, not only during the project phase but at the operational level, traceability allows easier identification of affected modules and their specifications (perhaps even at the source-code level to some degree, depending on how traceability is implemented at a particular project or company), thus allowing easier resolution of defects and easier determination of the impact of a subsequent modification to the existing software. It therefore becomes an important factor in change management.

    The other important purpose is to be able to demonstrate to an inspector that the project and/or the company does in fact adhere to these good engineering practices. Inspectors may want you to demonstrate how a certain requirement that ended up in the system was actually implemented. One very basic step in demonstrating that is showing them that you have a mechanism to identify the relationships from the original user requirement all the way through to user acceptance testing. Most of the time it is impractical or unnecessary to give the inspector direct access to your traceability records; instead you show them an excerpt or a report. Such reports, which need not cover the entire system for this purpose, typically take the form of a matrix, hence another common usage of the term.

    What matters in any case is which process and tools are used, and how these are defined in the respective project's or company's software traceability procedure.
