

Friday, July 15, 2011

Summarize Verification Results in a Compliance Matrix

9.2.5 Compliance Matrix – The data resulting from the verification actions summarized in the verification matrix are collected in a compliance matrix. The compliance matrix shows performance for each requirement, flowing performance from the lowest levels of the system hierarchy up to the top levels. It identifies the source of the performance data and shows whether the design meets all requirements. This bottom-up flow of performance provides early indication of non-compliant system performance and facilitates defining mitigation plans if problems are identified during verification actions. An example compliance matrix for the switch module is shown in Figure 9-3.



Figure 9-3 An example compliance matrix for the simple switch function illustrated in Figure 9-1.
Note that the requirements half of the compliance matrix is identical to the requirements half of the verification matrix. The compliance matrix is easily generated by adding new columns to the verification matrix. Results that are non-compliant, such as the switching force, or marginally compliant, such as the on resistance, can be flagged by adding color to one of the value, margin or compliant columns or with notes in the comments column.
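Because the compliance matrix is just the verification matrix with result columns added, it can be produced with ordinary scripting or spreadsheet tooling. Below is a minimal sketch of the idea, assuming hypothetical requirement IDs, limits and measured values rather than the actual data of Figure 9-3.

```python
# Minimal sketch: extend verification-matrix rows with measured values,
# compute margins, and flag non-compliant results.
# Field names, limits and measurements are illustrative assumptions only.

verification_rows = [
    {"req_id": "SW-001", "parameter": "switching force", "limit": 2.0, "units": "N"},
    {"req_id": "SW-002", "parameter": "on resistance",   "limit": 0.5, "units": "ohm"},
]

measured = {"SW-001": 2.3, "SW-002": 0.49}   # hypothetical values from test reports

compliance_rows = []
for row in verification_rows:
    value = measured[row["req_id"]]
    margin = row["limit"] - value            # positive margin means compliant for a maximum limit
    compliance_rows.append({**row,
                            "value": value,
                            "margin": margin,
                            "compliant": margin >= 0,
                            "comments": "" if margin >= 0 else "non-compliant; mitigation needed"})

for row in compliance_rows:
    flag = "OK  " if row["compliant"] else "FLAG"
    print(f'{flag} {row["req_id"]} {row["parameter"]}: value={row["value"]} '
          f'{row["units"]}, margin={row["margin"]:+.2f}')
```

In this sketch the switching force row comes out non-compliant and the on resistance row marginally compliant, which is exactly the kind of result that would be flagged with color or comments in the compliance matrix.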
In summary, the arrows labeled verification in Figure 6-4 from functional analysis to requirements analysis, from design to functional analysis and from design to requirements analysis relate to the iteration that systems engineers do to ensure the design is complete and accurate and that all “shall” requirements are verified in system integration and system test. This iteration ensures that a verification method is identified for each requirement, and that any necessary test equipment, test software and data analysis software are defined early enough to have validated test equipment, test procedures and test data analysis software ready when needed for system integration and test.
9.3 Systems Engineering Support to Integration, Test and Production
Manufacturing personnel and test personnel may have primary responsibility for integration, test and production; however, systems engineers must provide support to these tasks. Problem resolution typically involves both design and systems engineers, and perhaps other specialty engineers depending on the problem to be solved. Systems engineers are needed whenever circumstances require changes in parts or processes, to ensure system performance isn’t compromised.

Tuesday, July 12, 2011

Planning for System Integration and Testing

9.2.2 System Integration and Test Plan – The purpose of the System Integration and Test Plan (SITP) is to define the step-by-step process for combining components into assemblies, assemblies into subsystems and subsystems into the system. It is also necessary to define at what level software is integrated and the levels at which verification of software and hardware is conducted. Because of the intimate relationship of the verification matrix to system integration and test, it is recommended that the SITP be developed before the verification matrix is deemed complete.
The SITP defines the buildup of functionality, and the best approach is usually to build from lowest to higher complexity. Thus, the first steps in integration cover the lowest levels of functionality, e.g. backplanes, operating systems and electrical interfaces. Then add increasing functionality such as device drivers, functional interfaces, more complex functions and modes. Finally, implement system threads such as major processing paths, error detection paths and end-to-end threads. Integration typically happens in two phases, hardware to hardware and then software to hardware, because software configuration item testing often needs operational hardware to be valid. Two general principles to follow are: test functionality and performance at the lowest level possible, and do not integrate any hardware or software whose functionality and performance have not been verified unless this cannot be avoided. It isn’t always possible to follow these principles; e.g. sometimes software must be integrated with hardware before either can be meaningfully tested.
One objective of the SITP is to define a plan that avoids, as much as possible, having to disassemble the system to implement fixes to problems identified in testing. A good approach is to integrate risk mitigation into the SITP. For example, there is often a vast difference between the impact of an electrical design problem and a mechanical or optical design problem. Some electrical design or fabrication problems discovered in I & T of an engineering model can be corrected with temporary fixes (“green wires”) and I & T can continue with minimal delay. However, a serious mechanical or optical problem found in the late stages of testing, e.g. in a final system level vibration test, can take months to fix due to the time it takes to redesign and fabricate mechanical or optical parts and conduct the necessary regression testing. Sometimes constructing special test fixtures for early verification of the performance of mechanical, electro-mechanical or optical assemblies is good insurance against discovering design problems in the final stages of I & T.
The integration plan can be described with an integration flow chart or with a table listing the integration steps in order. An integration flow chart graphically illustrates the components that make up each assembly, the assemblies that make up each subsystem, etc. Preparing the SITP is an activity that benefits from close cooperation among systems engineers, software engineers, test engineers and manufacturing engineers. For example, systems engineers typically define the top level integration flow for engineering models using the guidelines listed above, while manufacturing engineers typically define the detailed integration flow to be used for manufacturing prototypes and production models. If the systems engineers define the engineering model flow with the same type of documentation that the manufacturing engineers use, then the same documentation can likely be edited and expanded by the manufacturing engineers for their purposes.
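When a table of ordered integration steps is used, it can be kept as simple structured data that records what is integrated and how it is verified at each step. The sketch below is a minimal illustration; the step contents are hypothetical and not taken from any particular SITP.

```python
# Minimal sketch of an integration flow captured as an ordered table of steps.
# Each step records what is integrated and how it is verified before the next step.
# All step contents are illustrative assumptions.

integration_steps = [
    {"step": 1, "level": "assembly",  "integrate": ["backplane", "power supply"],
     "verify": "electrical interface checks"},
    {"step": 2, "level": "assembly",  "integrate": ["processor board", "operating system"],
     "verify": "boot and built-in test"},
    {"step": 3, "level": "subsystem", "integrate": ["device drivers", "functional interfaces"],
     "verify": "driver-level functional tests"},
    {"step": 4, "level": "system",    "integrate": ["application software", "end-to-end threads"],
     "verify": "system thread tests (processing and error-detection paths)"},
]

for s in integration_steps:
    print(f'Step {s["step"]} ({s["level"]}): integrate {", ".join(s["integrate"])}; '
          f'verify by {s["verify"]}')
```

Keeping the flow in this kind of tabular form makes it easy for manufacturing engineers to edit and expand the same data for prototype and production builds.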
It should be expected that problems will be identified during system I & T. Therefore, processes for reporting and resolving failures should be part of an organization's standard processes and procedures. System I & T schedules should include contingency for resolving problems. Risk mitigation plans should be part of the SITP and be in place for I & T, such as having adequate supplies of spare parts, or even spare subsystems, for long lead time and high risk items.
System integration is complete when a defined subset of system level functional tests has been informally run and passed, all failure reports are closed out and all system and design baseline databases have been updated. The final products from system integration include the Test Reports, Failure Reports and the following updated documentation:
  • Rebaselined System Definition
    • Requirements documents and ICDs
    • Test Architecture Definition
    • Test Plans
    • Test Procedures
  • Rebaselined Design Documentation
    • Hardware Design Drawings
    • Fabrication Procedures
    • Formal Release of Software including Build Procedures and a Version Description Document
    • System Description Document
    • TPM Metrics
It is good practice to gate integration closeout with a Test Readiness Review (TRR) to review the hardware/software integration results and to ensure that the system is ready to enter formal engineering or development model verification testing and that all test procedures are complete and in compliance with test plans. On large systems it is beneficial to hold a TRR for each subsystem or line replaceable unit (LRU) before holding the system level TRR.
9.2.3 Test Architecture Definition and Test Plans and Procedures – The SITP defines the tests that are to be conducted to verify performance at appropriate levels of the system hierarchy. Having defined the tests and test flow, it is necessary to define the test equipment and the plans and procedures to be used to conduct the tests. Different organizations may have different names for the documentation defining test equipment and plans. Here the document defining the test fixtures, test equipment and test software is called the Test Architecture Definition. The Test Architecture Definition should include the test requirements traceability database and the test system and subsystem specifications.
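At its simplest, the test requirements traceability database is a mapping from each requirement to the tests that verify it, which can be checked mechanically for coverage. A minimal sketch, using hypothetical requirement and test identifiers:

```python
# Minimal sketch of a test requirements traceability check.
# Requirement and test identifiers are hypothetical.

requirements = ["SW-001", "SW-002", "SW-003"]

# Which tests claim to verify which requirements
trace = {
    "TP-010 switch timing test": ["SW-001"],
    "TP-011 contact resistance test": ["SW-002"],
}

covered = {req for reqs in trace.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]

if uncovered:
    print("Requirements with no verifying test:", ", ".join(uncovered))
else:
    print("All requirements trace to at least one test.")
```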
Test Plans define the approach to be taken in each test; i.e. what tests are to be run, the order of the tests, the hardware and software equipment to be used and the data that is to be collected and analyzed. Test Plans should define the entry criteria to start tests, suspension criteria to be used during tests and accept/reject criteria for test results.
Test Procedures are the detailed step by step documentation to be followed in carrying out the tests and documenting the test results defined in the Test Plans. Other terminologies include a System Test Methodology Plan that describes how the system is to be tested and a System Test Plan that describes what is to be tested. Document terminology is not important; what is important is defining and documenting the verification process rigorously.
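The entry, suspension and accept/reject criteria called for in the Test Plans are also easy to keep in a structured, reviewable form alongside the prose documents. The sketch below is one possible representation; the test identifier and criteria text are illustrative assumptions.

```python
# Minimal sketch of the criteria section of a Test Plan captured as data.
# Test identifier and criteria text are illustrative only.

test_plan = {
    "test_id": "TP-010",
    "title": "Switch module timing test",
    "entry_criteria": [
        "Test equipment calibration is current",
        "Test procedure approved and under configuration control",
    ],
    "suspension_criteria": [
        "Any safety limit exceeded",
        "Test equipment failure or loss of calibration",
    ],
    "accept_reject_criteria": [
        "Switching time within the specified maximum for all trials",
        "No unexplained anomalies in recorded data",
    ],
    "data_to_collect": ["switching time per trial", "supply voltage", "temperature"],
}

for section in ("entry_criteria", "suspension_criteria", "accept_reject_criteria"):
    print(section.replace("_", " ").title() + ":")
    for item in test_plan[section]:
        print("  -", item)
```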
Designing, developing and validating the test equipment and test procedures for a complex system is nearly as complex as designing and developing the system and warrants a thorough systems engineering effort. Neglecting to put sufficient emphasis or resources on these tasks can result in delays of readiness of the test equipment or procedures and risks serious problems in testing due to inadequate test equipment or processes. Sound systems engineering practices treat test equipment and test procedure development as deserving the same disciplined effort and modern methods as used for the system under development.
The complexity of system test equipment and system testing drives the need for disciplined system engineering methods and is the reason for developing test related documentation in the layers of SITP, Test Architecture Definition, Test Plans and finally Test Procedures. The lower complexity top level layers are reviewed and validated before developing the more complex lower levels. This approach abstracts detail in the top levels making it feasible to conduct reviews and validate accuracy of work without getting lost in the details of the final documentation.
The principle of avoiding having to redo anything that has been done before also applies to developing the Test Architecture Definition, Test Plans and Test Procedures. This means designing the system to be testable using existing test facilities and equipment where this does not compromise meeting system specifications. When existing equipment is inadequate, strive to find commercial off the shelf (COTS) hardware and software for the test equipment. If it is necessary to design new special purpose test equipment, then consider whether future system tests are likely to require similar new special purpose designs. If so, it may be wise to use pattern-based systems engineering for the test equipment as well as the system.
Where possible, use test methodologies and test procedures that have been validated through prior use. If changes are necessary, developing Test Plans and Procedures by editing documentation from previous system test programs is likely to be faster, less costly and less prone to error than writing new plans. Sometimes test standards are available from government agencies.
9.2.4 Test Data Analysis – Data collected during systems tests often requires considerable analysis in order to determine if performance is compliant with requirements. The quantity and types of data analysis needed should be identified in the test plans and the actions needed to accomplish this analysis are to be included in the test procedures. Often special software is needed to analyze test data. This software must be developed in parallel with other system software since it must be integrated with test equipment and validated by the time the system completes integration. Also some special test and data analysis software may be needed in subsystem tests during integration. Careful planning and scheduling is necessary to avoid project delays due to data analysis procedures and software not being complete and validated by the time it is needed for system tests.
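Much of this analysis reduces to summarizing the raw measurements and comparing them against required limits to compute margins, which is why the analysis software can be specified and validated alongside the test procedures. A minimal sketch of that kind of check, with hypothetical data and limits:

```python
import statistics

# Minimal sketch of test data analysis: reduce raw measurements and compare
# against a requirement limit. Data and limit are illustrative assumptions.

switching_times_ms = [4.1, 4.3, 3.9, 4.6, 4.2]   # raw data from repeated trials
required_max_ms = 5.0                             # "shall" requirement limit

mean_ms = statistics.mean(switching_times_ms)
worst_ms = max(switching_times_ms)
margin_ms = required_max_ms - worst_ms            # margin against the worst trial

print(f"mean = {mean_ms:.2f} ms, worst = {worst_ms:.2f} ms, "
      f"margin = {margin_ms:+.2f} ms -> {'compliant' if margin_ms >= 0 else 'non-compliant'}")
```

Results like these feed directly into the value, margin and compliant columns of the compliance matrix.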

Friday, July 8, 2011

Methods for Verifying System Performance

9.2 Verifying that System Requirements are Met
The first phase of verifying system requirements is a formal engineering process that starts with requirements analysis and ends when the system is accepted by its customer. During system integration and testing, steps must be taken to verify that the system satisfies every “shall” statement in the requirements. These “shall” requirements are collected in a document called the Verification Matrix. The results of the integration and testing of these requirements are documented in a Compliance Matrix. The integration and testing is defined and planned in a System Integration and Test Plan. Related documentation includes the Test Architecture Definition, hardware and software Test Plans & Procedures and Test Data Analysis Plans.
The roles of systems engineers in verification include:
  • Developing the optimum test strategy and methodology and incorporating it into the design as it is developed
  • Developing the top level System Integration and Test Plan
  • Developing the hierarchy of Integration and Test Plans from component level to system level
  • Documenting all key system and subsystem level tests
  • Defining system and subsystem level test equipment needed and developing test architecture designs
  • Developing the Test Data Analysis Plans
  • Analyzing test data
  • Ensuring that all shall requirements are verified and documented in the Compliance Matrix.
Good systems engineering practice requires that planning for requirements verification take place in parallel with requirements definition. The decision to define a requirement with a “shall” or “may” or “should” statement involves deciding whether the requirement must be verified and, if so, how the requirement will be verified. This means that the requirements verification matrix should be developed in parallel with the system requirements documentation and reviewed when the system requirements are reviewed, e.g. at peer reviews and at a formal System Requirements Review (SRR).
9.2.1 Verification Matrix – The verification matrix is documentation that defines for each requirement the verification method, the level and type of unit for which the verification is to be performed and any special conditions for the verification. Modern requirements management tools facilitate developing the verification matrix. If such tools are not used then the verification matrix can be developed using standard spreadsheet tools.
There are standard verification methods used by systems engineers. These methods are:
  1. Analysis – Verifies conformance to required performance by the use of analysis based on verified analytical tools, modeling or simulations that predict the performance of the design with calculated data or data from lower level component or subsystem testing. Used when physical hardware and/or software is not available or not cost effective.
  2. Inspection – Visually verifies form, fit and configuration of the hardware and of software. Often involves measurement tools for measuring dimensions, mass and physical characteristics.
  3. Demonstration – Verifies the required operability of hardware and software without the aid of test devices. If test devices are required, they are selected so as not to contribute to the results of the demonstration.
  4. Test – Verifies conformance to required performance, physical characteristics and design construction features by techniques using test equipment or test devices. Intended to be a detailed quantification of performance.
  5. Similarity – Verifies requirement satisfaction based on certified usage of similar components under identical or harsher operating conditions.
  6. Design – Used when compliance is obvious from the design, e.g. “The system shall have two modes, standby and operation”.
  7. Simulation – Compliance applies to a finished data product after calibration or processing with system algorithms. May be the only way to demonstrate compliance.
The DoD SEF defines only the first four of the methods listed above. Many experienced systems engineers find these four too restrictive and also use the other three methods listed. To illustrate a verification matrix with an example consider the function Switch Power. This function might be decomposed as shown in Figure 9-1.

Figure 9-1 A function Switch Power might be decomposed into four sub functions.
An example verification matrix for the functions shown in Figure 9-1 is shown in Figure 9-2. In this example it is assumed that the switch power function is implemented in a switch module and that both an engineering model and a manufacturing prototype are constructed and tested. No verification of the switch module itself is specified for production models; verification of the module performance for production models is assumed to be included in other system level tests.
It’s not important whether the verification matrix is generated automatically from requirements management software or by copy and paste from a requirements spreadsheet. What is important is not to have to re-enter requirements from the requirements document into the verification matrix, as this opens the door to simple typing mistakes.
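One low-tech way to honor that rule is to generate the verification matrix skeleton directly from the requirements spreadsheet so the requirement text is copied, never retyped. A minimal sketch, assuming a hypothetical requirements.csv with req_id and text columns:

```python
import csv

# Minimal sketch: derive a verification matrix skeleton from the requirements
# file so that requirement text is never re-entered by hand.
# File names and column names are assumptions for illustration.

with open("requirements.csv", newline="") as f:
    requirements = [row for row in csv.DictReader(f) if "shall" in row["text"]]

with open("verification_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["req_id", "text", "method", "level", "conditions"])
    writer.writeheader()
    for req in requirements:
        # Method, level and special conditions are filled in during verification planning.
        writer.writerow({"req_id": req["req_id"], "text": req["text"],
                         "method": "", "level": "", "conditions": ""})
```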

Figure 9-2 An example verification matrix for a switch module.

Wednesday, July 6, 2011

Verifying the Performance of a System Design

9 Processes and Tools for Verifying Technical Performance
9.0 Introduction
There are two approaches to verifying technical performance. One is using good engineering practices in all the systems engineering and design work to ensure that the defined requirements and the design meet customer expectations. The other is a formal verification process applied in two phases to the hardware and software resulting from the design to verify that requirements are met. Both begin during requirements analysis and continue until a system is operational. The work is represented by the three arrows labeled verification in Figure 6-4 that constitute the backward part of the requirements loop, the design loop and the loop from design synthesis back to requirements analysis, and it includes both verifying the completeness and accuracy of the design and verifying the technical performance of the system.
The first phase of formal system verification typically ends with delivery of a system to the customer but may include integration and testing with a higher level system of the customer or another supplier. The second phase of system verification is accomplished by testing the system in its intended environment and used by its intended users. This phase is typically called operational test and evaluation and is the responsibility of the customer for military systems but may involve the supplier for commercial systems and some NASA systems.
9.1 Verifying Design Completeness and Accuracy
Verifying the completeness and accuracy of the design is achieved by a collection of methods and practices rather than a single formal process. The methods and practices used by systems engineers include:
  • System engineers checking their own work
  • Checking small increments of work via peer reviews
  • Conducting formal design reviews
  • Using diagrams, graphs, tables and other models in place of text where feasible and augmenting necessary text with graphics to reduce ambiguity
  • Using patterns vetted by senior systems engineers to help ensure completeness and accuracy of design documentation
  • Developing and comparing the same design data in multiple formats, e.g. diagrams and matrices or diagrams and tables
  • Verifying functional architecture by developing a full set of function and mode mapping matrices including:
    • Customer defined functions to functions derived by development team (Some team derived functions are explicit in customer documentation and some are implicit.)
    • Functions to functions for defining internal and external interfaces among functions
    • Sub modes to functions for each of the system modes
    • Mode and sub mode transition matrices defining allowable transitions between modes and between sub modes (see the transition-matrix sketch after this list)
  • Using tools such as requirements management tools that facilitate verifying completeness and traceability of each requirement
  • Using QFD and Kano diagrams to ensure completeness of requirements and identify relationships among requirements
  • Using robust design techniques such as Taguchi Design of Experiments
  • Iterating between requirements analysis and functional analysis and between design synthesis and functional analysis
  • Employing models and simulations to both define requirements and verify that design approaches satisfy performance requirements
  • Validating all models and simulations used in systems design before use (Note the DoD SEF describes verification, validation and accreditation of models and simulations. Here only verification and validation are discussed.)
  • Employing sound guidelines in evaluating the maturity of technologies selected for system designs
  • Maintaining a thorough risk management process throughout the development program
  • Conducting failure modes and effects analysis (FMEA) and worst case analysis (WCA).
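As an illustration of the mode and sub mode transition matrices mentioned in the list above, a transition matrix is simply a table stating which mode changes are allowed, which makes it easy to review for completeness and to check a design against. A minimal sketch with hypothetical modes:

```python
# Minimal sketch of a mode transition matrix for checking allowable transitions.
# The modes and allowed transitions are hypothetical.

modes = ["Off", "Standby", "Operate"]

# allowed[from_mode][to_mode] is True if the transition is permitted
allowed = {
    "Off":     {"Off": False, "Standby": True,  "Operate": False},
    "Standby": {"Off": True,  "Standby": False, "Operate": True},
    "Operate": {"Off": False, "Standby": True,  "Operate": False},
}

def check_transition(from_mode: str, to_mode: str) -> bool:
    """Return True if the mode change is allowed by the transition matrix."""
    return allowed[from_mode][to_mode]

print(check_transition("Off", "Operate"))      # False: must pass through Standby
print(check_transition("Standby", "Operate"))  # True
```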
Engineers checking their own work are the first line of defense against human errors that can affect system design performance. One of the most important things that experienced engineers can teach young engineers is the importance of checking all work, along with the collection of verification methods they have learned over their years in the engineering profession. Engineers are almost always working under time pressure, and it takes discipline to take the time to check work at each step so that simple human mistakes don’t result in having to redo large portions of work. This is the same principle behind using peer reviews to catch mistakes early, when little rework is required, rather than relying on catching mistakes at major design reviews, where correcting mistakes often requires significant rework with significant impact on schedule and budget.
A reason for presenting duplicate methods and tools for the same task in Chapter 6 was not just that different people prefer different methods but also to provide a means of checking the completeness and accuracy of work. The time it takes to develop and document a systems engineering product is usually a small fraction of the program schedule, so taking time to generate a second version in a different format does not significantly impact schedule and is good insurance against incomplete or inaccurate work.
Pattern based systems engineering, QFD and Taguchi Design of Experiments (DOE) help ensure the completeness, accuracy and robustness of designs. Extensive experience has demonstrated the cost effectiveness of using these methods even though QFD and Taguchi DOE require users to have specialized training to be effective.
Studies have shown that selecting immature technologies results in large increases in costs (see http://www.dau.mil/conferences/presentations/2006_PEO_SYSCOM/tue/A2-Tues-Stuckey.pdf). Immature technologies often do not demonstrate expected performance in early use and can lead to shortfalls in the technical performance of designs. As a result both NASA and DoD include guidelines for selecting technologies in their acquisition regulations. Definitions of technology readiness levels used by NASA are widely used and are listed at http://esto.nasa.gov/files/TRL_definitions.pdf . Definitions are also provided in Supplement 2-A of the DoD SEF.
Failure analysis methods like FMEA and WCA are more often used by design engineers than systems engineers, but systems engineers can include failure modes and worst case considerations when defining the criteria used in system design trade studies.
In summary, verifying technical performance during system development includes the disciplined use of good engineering practices as well as the formal performance verification process. This should not be surprising as these practices have evolved through experience specifically to ensure that system designs meet expected performance as well as other requirements.