- System engineers checking their own work
- Checking small increments of work via peer reviews
- Conducting formal design reviews
- Using diagrams, graphs, tables and other models in place of text where feasible and augmenting necessary text with graphics to reduce ambiguity
- Using patterns vetted by senior systems engineers to help ensure completeness and accuracy of design documentation
- Developing and comparing the same design data in multiple formats, e.g. diagrams and matrices or diagrams and tables
- Verifying functional architecture by developing a full set of function and mode mapping matrices including:
- Customer-defined functions to functions derived by the development team (some team-derived functions are explicit in customer documentation and some are implicit)
- Functions to functions, defining internal and external interfaces among functions
- Sub-modes to functions for each of the system modes
- Mode and sub-mode transition matrices defining allowable transitions between modes and between sub-modes
- Using tools such as requirements management tools that facilitate verifying completeness and traceability of each requirement
- Using QFD and Kano diagrams to ensure completeness of requirements and identify relationships among requirements
- Using robust design techniques such as Taguchi Design of Experiments
- Iterating between requirements analysis and functional analysis and between design synthesis and functional analysis
- Employing models and simulations to both define requirements and verify that design approaches satisfy performance requirements
- Validating all models and simulations used in systems design before use (Note the DoD SEF describes verification, validation and accreditation of models and simulations. Here only verification and validation are discussed.)
- Employing sound guidelines in evaluating the maturity of technologies selected for system designs
- Maintaining a thorough risk management process throughout the development program
- Conducting failure modes and effects analysis (FMEA) and worst-case analysis (WCA)
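The mode and sub-mode transition matrices above reduce naturally to a simple lookup. The sketch below encodes allowable transitions and checks a proposed transition against them; the mode names are hypothetical and not drawn from any particular system.

```python
# Hypothetical system modes; a real matrix would also cover sub-modes per mode.
MODES = ["Off", "Standby", "Operate", "Maintenance"]

# Allowable transitions, i.e. the "1" cells of the transition matrix.
ALLOWED = {
    ("Off", "Standby"),
    ("Standby", "Off"),
    ("Standby", "Operate"),
    ("Operate", "Standby"),
    ("Standby", "Maintenance"),
    ("Maintenance", "Off"),
}

def transition_allowed(current: str, nxt: str) -> bool:
    """Return True if the matrix permits moving from current to nxt."""
    return (current, nxt) in ALLOWED
```

Holding the matrix as explicit data makes it easy to review for completeness (every mode reachable, every mode exitable) as well as to enforce in software.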
Wednesday, July 6, 2011
9 Processes and Tools for Verifying Technical Performance
There are two approaches to verifying technical performance. The first is applying good engineering practices throughout all systems engineering and design work to ensure that the defined requirements and the resulting design meet customer expectations. The second is a formal verification process, applied in two phases to the hardware and software produced from the design, to verify that requirements are met. Both begin during requirements analysis and continue until the system is operational. This work is represented by the three arrows labeled verification in Figure 6-4, which form the backward parts of the requirements loop, the design loop, and the loop from design synthesis back to requirements analysis; it includes both verifying the completeness and accuracy of the design and verifying the technical performance of the system.
The first phase of formal system verification typically ends with delivery of the system to the customer, although it may include integration and testing with a higher-level system belonging to the customer or another supplier. The second phase is accomplished by testing the system in its intended environment, operated by its intended users. This phase, typically called operational test and evaluation, is the responsibility of the customer for military systems but may involve the supplier for commercial systems and some NASA systems.
9.1 Verifying Design Completeness and Accuracy
Verifying the completeness and accuracy of the design is achieved by a collection of methods and practices rather than by a single formal process. The methods and practices used by systems engineers are those itemized in the list above; each is discussed in turn below.
Engineers checking their own work are the first line of defense against the human errors that can affect system design performance. One of the most important things experienced engineers can teach young engineers is the importance of checking all work, along with the collection of verification methods they have accumulated over their years in the profession. Engineers are almost always working under time pressure, and it takes discipline to check work at each step so that simple human mistakes don't result in having to redo large portions of work. The same principle underlies peer reviews: catch mistakes early, when little rework is required, rather than at major design reviews, where correcting mistakes often requires substantial rework with corresponding impact on schedule and budget.
A reason for presenting duplicate methods and tools for the same task in Chapter 6 was not just that different people prefer different methods but also to provide a means of checking the completeness and accuracy of work. The time it takes to develop and document a systems engineering product is usually a small fraction of the program schedule, so taking the time to generate a second version in a different format does not significantly impact schedule and is good insurance against incomplete or inaccurate work.
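Comparing the same design data held in two formats can itself be partly automated. The sketch below cross-checks a diagram-style interface edge list against the equivalent N x N matrix; the function names and interfaces are hypothetical, invented for illustration.

```python
# Hypothetical functions and their interfaces, captured twice:
functions = ["Navigate", "Communicate", "Control"]

# Format 1: interfaces as a diagram-style edge list (source, destination).
edges = {("Navigate", "Control"), ("Communicate", "Control")}

# Format 2: the same interfaces as an N x N matrix (1 = interface exists),
# rows and columns ordered as in the functions list.
matrix = [
    [0, 0, 1],
    [0, 0, 1],
    [0, 0, 0],
]

def formats_agree(functions, edges, matrix):
    """Return True only if the edge list and matrix describe identical interfaces."""
    for i, src in enumerate(functions):
        for j, dst in enumerate(functions):
            if bool(matrix[i][j]) != ((src, dst) in edges):
                return False
    return True
```

A disagreement between the two formats flags exactly the kind of incomplete or inaccurate entry this practice is meant to catch.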
Pattern-based systems engineering, QFD, and Taguchi Design of Experiments (DOE) help ensure the completeness, accuracy, and robustness of designs. Extensive experience has demonstrated the cost-effectiveness of these methods, even though QFD and Taguchi DOE require users to have specialized training to be effective.
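To give a flavor of the Taguchi approach, the sketch below computes factor main effects from an L4 orthogonal array; the three factors and the response values are invented for illustration and do not come from any real experiment.

```python
# L4 orthogonal array for three two-level factors (levels coded 1 and 2).
# Every pair of columns contains each level combination exactly once.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Hypothetical measured responses for the four experimental runs.
responses = [20.0, 24.0, 30.0, 26.0]

def main_effect(factor_index: int) -> float:
    """Average response at level 2 minus average response at level 1."""
    level1 = [r for run, r in zip(L4, responses) if run[factor_index] == 1]
    level2 = [r for run, r in zip(L4, responses) if run[factor_index] == 2]
    return sum(level2) / len(level2) - sum(level1) / len(level1)
```

Because the array is orthogonal, four runs suffice to estimate all three main effects independently, which is the source of the method's efficiency.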
Studies have shown that selecting immature technologies results in large cost increases (see http://www.dau.mil/conferences/presentations/2006_PEO_SYSCOM/tue/A2-Tues-Stuckey.pdf). Immature technologies often do not demonstrate expected performance in early use and can lead to shortfalls in the technical performance of designs. As a result, both NASA and DoD include guidelines for selecting technologies in their acquisition regulations. The definitions of technology readiness levels used by NASA are widely adopted and are listed at http://esto.nasa.gov/files/TRL_definitions.pdf. Definitions are also provided in Supplement 2-A of the DoD SEF.
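A technology-maturity gate built on the NASA TRL scale might look like the minimal sketch below. The level descriptions are paraphrased from the published definitions, and the TRL 6 threshold is an assumed guideline for illustration, not a quoted regulation.

```python
# NASA technology readiness levels, descriptions paraphrased/abbreviated.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Proof of concept demonstrated analytically or experimentally",
    4: "Component validated in a laboratory environment",
    5: "Component validated in a relevant environment",
    6: "System/subsystem prototype demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Actual system completed and qualified through test",
    9: "Actual system proven through successful operations",
}

def mature_enough(trl: int, threshold: int = 6) -> bool:
    """Flag technologies below an assumed TRL threshold as a selection risk."""
    if trl not in TRL:
        raise ValueError(f"TRL must be 1-9, got {trl}")
    return trl >= threshold
```

In practice the appropriate threshold depends on the acquisition phase and the applicable NASA or DoD guidance.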
Failure analysis methods like FMEA and WCA are used more often by design engineers than by systems engineers, but systems engineers can include failure modes and worst-case considerations when defining the criteria used in system design trade studies.
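As a small taste of worst-case analysis, the sketch below propagates component tolerances through a hypothetical voltage divider to find the output extremes; all part values and tolerances are invented for illustration.

```python
from itertools import product

# Hypothetical voltage divider: Vout = Vin * R2 / (R1 + R2).
VIN = 5.0             # source voltage, volts (assumed exact for this sketch)
R1, R2 = 10e3, 10e3   # nominal resistances, ohms
TOL = 0.05            # +/- 5 % resistor tolerance

def vout(r1: float, r2: float) -> float:
    """Divider output voltage for the given resistance values."""
    return VIN * r2 / (r1 + r2)

# Evaluate every corner combination of component extremes.
corners = [
    vout(R1 * (1 + s1 * TOL), R2 * (1 + s2 * TOL))
    for s1, s2 in product((-1, 1), repeat=2)
]
worst_low, worst_high = min(corners), max(corners)
```

Corner enumeration is the simplest WCA style; for monotonic responses like this one it brackets the true extremes, and the same bounds feed naturally into trade-study criteria.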
In summary, verifying technical performance during system development includes the disciplined use of good engineering practices as well as the formal performance verification process. This should not be surprising, as these practices have evolved through experience specifically to ensure that system designs meet expected performance as well as other requirements.