The collection of blog articles posted on this site is now available in book form for $19.50. You can order it from CreateSpace or from Amazon. The book includes edited versions of the blog articles plus other typical features of a book, such as a table of contents and an index. The title of the book is Modern Methods of Systems Engineering: With an Introduction to Pattern and Model Based Methods.
You can learn more about the book and the authors at our web site. Our plan is to add material to this blog as the authors feel it can contribute to systems engineering methods. This material will be added as it becomes available rather than on a regular weekly basis. The authors welcome comments on any of the blog articles and corrections, suggestions or other comments on the book. If any readers have material they feel contributes to systems engineering methods that isn't covered in the book, please contact us via a comment and we will consider adding your material to this blog.
The Manager’s Guide contains blog posts on Leadership and Systems Engineering. The Leadership posts provide a self-study course in leadership for managers and for workers who wish to prepare themselves for management. The articles address motivating people and improving processes. People and processes are common to every type of organization so the course applies to any organization. The older posts cover Systems Engineering and can be found in the archive or by searching on key words.
Monday, October 24, 2011
12.3 Return to Chief Designer Model
Implementing ICE allows system development teams to function much like the chief designer and draftsman/assistant teams that were common before the emergence of modern complex systems in the 1960s. The large screen displays in a design command center, together with the supporting analysis models and simulations, bring design information to the lead systems engineer with very little information latency. In a design session the lead systems engineer can interact with the design team just as a chief designer interacted with the draftsmen/assistants in former times. This may be as near to the efficiency of the "craftsman" model as can be expected for the development of complex systems. In a mature ICE environment supported by comprehensive analysis, modeling and simulation tools, lead systems engineers can be empowered to function as chief designers for the systems engineering work. The lead systems engineer can even be empowered to function as the chief designer for the entire development cycle if supported by specialist chief designers who are responsible for the electrical design, the mechanical design, etc.
Implementing ICE with an overall chief designer and supporting specialty chief designers for each IPT allows IPT design sessions to be interleaved with SEIT design sessions. This interleaving achieves the desired iteration between levels of design and the coordination between IPTs necessary to maintain balance in the design, and it is likely to reduce the development schedule significantly.
The actual times required for the planning and for the documentation and analysis periods depend strongly on the sophistication of the tools used by the design team. If pattern based systems engineering is used, and if the team's modeling and simulation tools are extensive and mature, then it may be possible to integrate the planning and documentation/analysis periods into the design sessions, so that the design work becomes a continuous series of three to four hour intense design sessions in the design command center, each followed by a day or two of planning/documentation/analysis, followed by another design session. Alternatively, the team may be organized with design specialists and documentation specialists. The design specialists conduct analysis, modeling and simulations to determine design parameters. The documentation specialists capture the design parameters and produce the necessary specifications, drawings and CDRLs while the design specialists are generating the next layer of design parameters.
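The specialist split described above is effectively a two-stage pipeline: documentation specialists capture design layer n while design specialists generate layer n+1. A minimal sketch of how this overlap compresses the calendar (the layer names and stage durations are invented for illustration):

```python
# Sketch of the pipelined cadence: documentation of one design layer
# overlaps generation of the next.  Layer names/durations are invented.

def pipeline_schedule(layers, design_days=2, doc_days=2):
    """Return (task, start_day, end_day) tuples for a two-stage pipeline."""
    schedule = []
    design_end = 0
    doc_end = 0
    for layer in layers:
        design_start = design_end             # design specialists never idle
        design_end = design_start + design_days
        doc_start = max(design_end, doc_end)  # doc starts once design is done
        doc_end = doc_start + doc_days
        schedule.append((f"design {layer}", design_start, design_end))
        schedule.append((f"document {layer}", doc_start, doc_end))
    return schedule

sched = pipeline_schedule(["layer 1", "layer 2", "layer 3"])
for task, start, end in sched:
    print(f"{task}: day {start}-{end}")
```

With these assumed two-day stages, three layers finish in eight days instead of the twelve days a strictly serial design-then-document flow would take.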
12.4 Integrating Modern Methods
The 21st century brought new constraints to system development:
- Customers and global competition are demanding faster and cheaper system development
- Skilled engineers are retiring faster than their replacements can gain the experience needed to replace them
- Development teams are spread across multiple sites and multiple organizations.
This new century has also brought new tools for system development:
- Fast internet and intranet connections provide real time communication across multiple sites
- Relatively cheap but powerful computers and network communication tools
- Model based and Pattern Based Systems Engineering processes
- Powerful CAD tools
- Maturing integrated design and design documentation processes
- Some integrated design and manufacturing tools
- Potential for end to end documentation management
The question for systems engineers is how to use the new tools to relieve the new constraints.
One answer to this question is to integrate the methods described in this and previous chapters with disciplined execution of the traditional fundamentals of the systems engineering process.
Figure 12-4 illustrates methods that can be synergistically integrated to reduce design time by factors of three to ten and cost by factors of two to three. These benefits are not achieved instantly. Training is needed for teams to use these methods effectively. Investment is necessary to achieve the best results from PBSE and to push patterns down from the system level to the subsystem and assembly levels. Ongoing investment is necessary to maintain the modeling, simulation, software development and CAD/CAM tools required to remain competitive. Document generation and document management tools are likely to require investment and training before they effectively reduce engineering effort. Finally, it must be recognized that systems engineering is going to continue to evolve, both by inventing new processes and tools and by introducing new methods and tools for executing current processes.
The rapid introduction of new tools and processes in the past two decades has increased the fraction of a systems engineer's time that must be spent in training and self-study in order to maintain required skills. This is likely to continue. The increases in the complexity of new systems are also likely to continue, and these complexity increases may require more sophisticated systems engineering processes than are available today. Hopefully new methods and tools will be developed that can handle the increased system complexity, and the productivity gains from using new methods will be enough to make time available for the training and self-study systems engineers will need.
Figure 12-4 The methods described in this book can be integrated to provide a robust approach to system development that can achieve dramatic reductions in cost and design time.
Tuesday, October 18, 2011
12.2 Integrated Concurrent Engineering (ICE) for Small Teams
The ICE approach described in Sections 12.1.1 and 12.1.2 applies to teams of 15 to several hundred people, assuming the large teams are organized into smaller IPTs of 10 to 25 people. The design command centers can be shared by many individual teams on a development project because each team typically uses the center for only a half day at a time and for only three to ten days a month. Some system developments can be accomplished with smaller teams of five to ten people. While small teams can also use the same design command center and concept of operations as larger teams, an alternative approach may be even more efficient.
Work spaces in most organizations use individual cubicles or cubicles shared by two or three people. Most of these work spaces are modular and can easily be reconfigured. For example, suppose a project has six or seven workers, each in his/her own cubicle. Typically, workers are assigned cubicles without consideration of where others working on the same project are located. Much of the communication takes place via emails or periodic meetings in a conference area. Figure 12-3 shows how a space of eight cubicles can be rearranged to collocate seven workers and a conference table. Collocating workers as shown in Figure 12-3 enables continuous face to face interactions to replace emails and periodic meetings in conference rooms. Research has shown that problems are solved much faster by groups communicating face to face than by groups communicating via email. That is to be expected because the information latency of face to face communication is nearly zero, whereas it is many seconds or even hours with email.
Productivity increases when two workers with related skills are close enough together that they can see each other's computer screens and discuss what is on a screen without moving from their work positions. Examples include mechanical and thermal engineers, or mechanical engineers and the designers skilled in mechanical CAD tools who support them.
If the team leader is collocated with the rest of the team so that he/she can facilitate an ICE process, then dramatic reductions in project cost and design time should be realized, just as they are for larger teams using ICE. A caution is that team dynamics are more important for collocated teams than for teams in individual cubicles. Teams must be composed of individuals who work well together or productivity suffers. Workers who perform better as individual contributors are likely better left in their own cubicles. It is also advisable to provide training so that workers understand why they are being asked to give up the privacy of individual cubicles.
Tuesday, October 11, 2011
12 Integrating Modern Methods for Faster Systems Engineering
12.0 Introduction
In Chapter 2 it was explained that the best model for system development is the "craftsman" model that was widely used before systems became so complex that a single chief engineer could no longer understand a system in sufficient detail to control all aspects of its design. Systems engineers, design engineers and other specialty engineers became necessary to handle the complexity of modern systems. Although this new approach has enabled the development of very complex modern systems, it takes much longer to develop a system now than it did when a chief engineer and his/her team could develop a new system in a few months.
One objective of this book is to introduce new methods that enable the systems engineering work on a system development to be accomplished faster and more accurately. This book emphasizes systems engineering fundamentals, as described in the DoD SEF and the NASA SE handbook, and readers will note that it takes time and discipline to follow these fundamental processes. Complex systems cannot be developed cost effectively by shortcutting the systems engineering fundamentals; what is needed is faster and more accurate methods for executing these fundamentals. Accuracy is required because any error in systems documentation results in costly "find and fix" efforts later in design or in integration and test. Several methodologies for ensuring accuracy have been discussed, including using graphical models in place of text as much as possible, employing redundant tools for developing documentation, using modeling and simulation to support requirements analysis as well as design, and checking work at three levels: the worker checking his/her own work, peer reviews and design reviews.
Chapter 5 introduced pattern based systems engineering, which, when properly implemented, can dramatically reduce the time needed to produce much of the top level systems engineering documentation while increasing the accuracy of requirements definition. Similarly, using validated system performance models and simulations throughout the development cycle reduces development time and increases the accuracy of requirements and design concepts and the robustness of systems.
The objective of this chapter is to describe methods for reducing information latency and then to show how integrating modern methods can achieve greatly reduced time for systems engineering work without sacrificing any process fundamentals critical to the accuracy of this work. Information latency is the time between when information is generated and the time it is available to others who are depending on the information for the next steps in their work. Information latency was increased with the evolution from the craftsman model for product development to models with systems engineers; this is the primary reason modern systems take so long in development. Reducing information latency to levels near what it was for the craftsman model is a necessary step in achieving faster system development cycles.
12.1 Integrated Concurrent Engineering
In the 1990s a method emerged for reducing information latency for system development teams. This method is similar to methods used previously when teams of workers were brought together in a common work area to collaborate on quickly accomplishing some project. Many organizations in the aerospace and defense industry use special work areas to collocate the people writing and publishing proposals, which are often highly time constrained projects. The use of proposal preparation rooms, with personnel dedicated to working in these rooms, results in highly productive teams for the limited times involved in typical proposal efforts. A major part of the increased productivity is due to the reduction in information latency achieved by having workers so close they can ask questions of one another and get immediate answers. If teams tried to maintain such intense work over long periods, productivity would taper off because workers cannot sustain the long hours and intense pace without burnout.
The methods that evolved in the 1990s achieve the reduction in information latency and the associated productivity gains of the colocated teams and permit teams to work effectively for long periods without burnout. These methods became possible by exploiting new technology as well as new work management methods.
The availability of inexpensive large screen projectors, n-to-one video switches and inter/intranets makes it cost effective to set up special work rooms where teams of 10 to 25 knowledge workers can gather with their laptops and software tools. These teams can work simultaneously and share the results with the entire team on the large screen displays as soon as the results are available. Many organizations now use such facilities for teams to gather for intense work and information sharing sessions of three to four hours, two or three times weekly. These sessions must be well planned, and workers must come prepared to work and share results in real time. Planning, documenting work and time consuming tasks are performed between the sessions in the special work rooms. This approach goes by a number of names, but Integrated Concurrent Engineering (ICE) is a common one. The approach is effective because it reduces information latency from minutes to seconds and from hours to minutes.
ICE is proven to reduce the cost and schedule of complex projects by factors of three to ten [12-1, 12-2]. Neff & Presley [12-3] reported that the Jet Propulsion Laboratory initially achieved an average of over 80% reduction in project costs and significantly improved the quality and speed of work. With more experience, a 92% reduction in design time and a 66% reduction in cost were reported. Designs produced using ICE are of higher quality because teams examine each option in greater detail earlier in the design process by sharing thousands of design variables in real time. Approaches that are proven to reduce cost and schedule by factors of three to ten while increasing quality should not be dismissed by organizations that wish to remain competitive.
The benefits of ICE are better understood by examining the work space and the work process in more detail. There is no single best work space design or work process; each organization tailors both to their views and their business processes. Examples presented here are guidelines for understanding ICE and not necessarily the best for any specific organization.
12.1.1 The ICE Design Command Center - A schematic diagram of a small ICE work area is shown in Figure 12-1. The room has large screen displays located where they are visible to everyone in the room. Several displays are used so that several different types of information can be displayed simultaneously. Each skill cluster has workers with common specialties, and each worker has computer equipment and the design, modeling and simulation tools associated with his/her specialty. Alternatively, each cluster can be an IPT responsible for a segment of the system design. Each computer is connected to one of the large screen displays via a video switch so that the results of analysis, modeling or simulation can be shared with everyone in the room. The facilitator, typically the lead systems engineer for the systems engineering phase of development, is responsible for keeping the design baseline visible to all at all times and for leading the team through a preplanned sequence of analysis tasks that lead to design decisions in real time.
12.1.2 The ICE Concept of Operations - Integrated Concurrent Engineering is a repeating series of planning sessions followed by team work sessions, followed by documentation and follow-up analysis in parallel with the planning for the next series of team work sessions. The times for each component of the ICE cycle depend on the type and complexity of the system being developed; example times are given here to explain the concept of operations. Development teams are likely to find that adapting this concept of operations to their system developments requires adjustments. A typical approach is illustrated in Figure 12-2, where a series of three plan/meet/document cycles is shown and each meet, or design, period is comprised of three intense team sessions separated by a day or two. Individual design sessions may last from two to four hours.
The planning, indicated by A in Figure 12-2, is done by team leaders and might take a week to plan a series of three intense work sessions, indicated by B, held over the following week. The series of work sessions is followed by perhaps two weeks of documenting the work done in the design sessions and carrying out analyses that take too much time to be done in the design sessions. In the example shown in Figure 12-2, nine intense design sessions are planned, executed and documented in an eight week period. Note that since the design sessions are the only activities that require the ICE design command center, such a center can support three or four ICE projects, or separate IPTs of a large project, concurrently.
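Because planning for the next series overlaps the documentation period of the current one, the cadence can be worked out with simple arithmetic. A sketch of the example timeline (the week numbers follow the example durations in the text; the code itself is illustrative):

```python
# Sketch of the example ICE cadence: plan one week, run three intense
# sessions the next week, then document for two weeks while the leaders
# plan the next series in parallel.  Week numbers are illustrative.

PLAN_WEEKS, SESSION_WEEKS, DOC_WEEKS = 1, 1, 2
SESSIONS_PER_SERIES, SERIES = 3, 3

timeline = []
plan_start = 1
for series in range(1, SERIES + 1):
    session_week = plan_start + PLAN_WEEKS
    doc_start = session_week + 1
    timeline.append((series, plan_start, session_week, doc_start))
    # planning for the next series overlaps this series' documentation
    plan_start = doc_start

for series, plan, sess, doc in timeline:
    print(f"series {series}: plan wk {plan}, sessions wk {sess}, "
          f"document wks {doc}-{doc + DOC_WEEKS - 1}")

total_sessions = SESSIONS_PER_SERIES * SERIES
print(f"{total_sessions} design sessions; last documentation closes week "
      f"{timeline[-1][3] + DOC_WEEKS - 1}")
```

Under these assumed durations the nine sessions are planned, executed and documented by the end of week eight, matching the eight week period in the example.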
12-1 David B. Stagney, The Integrated Concurrent Enterprise, MIT Department of Aeronautics and Astronautics and Sloan School of Management, August 13, 2003
12-2 John Chachere, John Kunz, and Raymond Levitt, Observation, Theory, and Simulation of Integrated Concurrent Engineering, Center for Integrated Facility Engineering, Working Paper #WP087, Stanford University, August 2004
12-3 Jon Neff and Stephen P. Presley, Implementing a Collaborative Conceptual Design System – The Human Element is the Most Powerful Part of the System, IEEE, 2000
Wednesday, September 14, 2011
11 Introduction to Model Based Systems Engineering
11.0 Introduction
The advantages of using labeled graphical models, diagrams, tables of data and similar non-prose descriptions compared to natural language or prose descriptions have been discussed several times. Now we make a distinction between two types of models. One type is, as stated, a non-prose description of something. The second type is analysis models: either static models that predict performance or dynamic models referred to as simulations. Static analysis models may be strictly analytical or may be machine readable and executable. Modern simulations are typically machine readable and executable. This is an arbitrary distinction, as the DoD defines a model as a physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, or process (DoD 5000.59-M, 1998).
In Chapter 5 it was stated that PBSE is model based but includes prose documents as well. The models used in PBSE can be of either the first type or the second type. Now we want to introduce a different approach to using models for systems engineering. This approach is called Model Based Systems Engineering (MBSE), and it strives to accomplish systems engineering with models that are machine readable, executable or operative. An INCOSE paper [11-1] defines MBSE as an approach to engineering that uses models as an integral part of the technical baseline that includes the requirements, analysis, design, implementation, and verification of a capability, system, and/or product throughout the acquisition life cycle.
This chapter is an introduction to MBSE; no attempt is made to review or even summarize the extensive literature on MBSE. MBSE is rapidly evolving, facilitated both by the development of commercial tools and by an INCOSE effort to extend the maturity and capability of MBSE over the decade from 2010 to 2020. While we attempt to describe how MBSE offers benefits compared to traditional prose based systems engineering, it isn't claimed that pure MBSE is superior or inferior to methodologies that mix MBSE, PBSE and traditional methods. The intent is to provide an introduction that enables readers to assess how MBSE can benefit their work and to point the way toward further study.
Traditional systems engineering is a mix of prose based material, typically requirements and plans, and models such as functional diagrams, physical diagrams and mode diagrams. Eventually design documentation ends in drawings, which are models. MBSE can be thought of as replacing the prose documents that define or describe a system, such as requirements documents, with models. We are not as concerned with plans, although plans such as test plans are greatly improved by including many diagrams, photos and other models with a minimum of prose.
To some it may seem difficult to replace requirements documents with models. However, QFD can be a stand-alone systems engineering process, and QFD is a type of MBSE. Although it does not attempt to heavily employ machine readable and executable models, QFD is an example of defining requirements in the form of models. Another way to think about requirements is that mathematically requirements are graphs and can therefore be represented by models. A third way to think about requirements as models is as tree structures. Each requirement may have parent requirements and daughter requirements, and just as no leaf of a tree can exist without connection to twigs, twigs to limbs, and limbs to the trunk, no requirement can stand alone. Trees can be represented by diagrams, so requirements can all be represented in a diagram.
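The tree analogy can be made concrete with a small data structure in which every requirement except the root has a parent, so no requirement stands alone. A sketch, with invented requirement IDs and text (the language and class design are illustrative choices, not from the text):

```python
# Minimal requirements-tree model: every requirement except the root
# has a parent, so no requirement can stand alone.  IDs are invented.

class Requirement:
    def __init__(self, req_id, text, parent=None):
        self.req_id = req_id
        self.text = text
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)   # register as a daughter requirement

    def trace_to_root(self):
        """Return the chain of requirement IDs from this leaf up to the trunk."""
        chain = [self.req_id]
        node = self
        while node.parent is not None:
            node = node.parent
            chain.append(node.req_id)
        return chain

root = Requirement("SYS-1", "The system shall switch signals between ports.")
sub = Requirement("SUB-1.2", "The switch module shall route any input to any output.",
                  parent=root)
leaf = Requirement("ASM-1.2.3", "The actuator shall close the contact within 5 ms.",
                   parent=sub)

print(leaf.trace_to_root())   # leaf -> twig -> trunk
```

Walking `trace_to_root` for any leaf reproduces the leaf-twig-limb-trunk connection described above, and the `children` lists give the diagram form of the same tree.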
Throughout this book there is an emphasis on representing design information as models in order to reduce ambiguity and the likelihood of misinterpretation of text based design information. There is also an emphasis on using analysis models and simulations as much as possible throughout the life cycle of a system development. The use of models and simulations improves functional analysis, design quality, system testing and system maintenance. Think of MBSE as combining these two principles; then it becomes clear why MBSE is desirable. Another way to contrast traditional systems engineering with MBSE is this: in traditional systems engineering, engineers write documents and then develop models from the documents. In MBSE the approach is to model what is to be built from the beginning.
Model based design has been standard practice for many engineering specialties since the 1980s. Structural analysis, thermal analysis, electrical circuit analysis, optical design analysis and aerodynamics are a few examples of the use of Computer Aided Design (CAD) or model based design analysis. It is systems engineering that has been slow to transition from non-model based methods, with the exception of performance modeling and simulation. To achieve the benefits of MBSE, systems engineers need to embrace requirements diagrams, Use Case analysis and other MBSE tools along with performance modeling and simulation.
11.1 Definitions of Models as Applied to MBSE
Models have been referred to throughout this material without providing a formal definition or defining the types of models typically used in systems engineering. Formally, a model is a representation of something, as described in the DoD definition given above. For our purposes a model is a representation of a design element of a system. Types of models of interest to MBSE include [11-2]:
Schematic Models: A chart or diagram showing relationships, structure or time sequencing of objects. For MBSE, schematic models should have a machine-readable representation. Examples include FFBDs, interface diagrams and network diagrams.
Performance Model: An executable representation that provides outputs of design elements in response to inputs. If the outputs are dynamic then the model is called a simulation.
Design Model: A machine interpretable version of the detailed design of a design element. Design models are usually represented by CAD drawings, VHDL, C, etc.
Physical Model: A physical representation that is used to experimentally provide outputs in response to inputs. A breadboard or brassboard circuit is an example.
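The distinction between a static performance model and a simulation can be illustrated with executable code. The resistive divider and RC step response below are invented examples, not from the text; they show a single static output versus a time-dependent output trace:

```python
# Sketch of the performance-model distinction: a static model maps inputs
# to outputs once; a simulation produces time-dependent outputs.  The
# divider and RC circuit here are illustrative choices.

import math

def static_gain(v_in, r1, r2):
    """Static performance model: output voltage of a resistive divider."""
    return v_in * r2 / (r1 + r2)

def rc_step_response(v_in, r, c, steps, dt):
    """Simulation: time-dependent capacitor voltage after a step input."""
    return [(i * dt, v_in * (1 - math.exp(-(i * dt) / (r * c))))
            for i in range(steps + 1)]

print(static_gain(5.0, 1e3, 1e3))                     # one static output
trace = rc_step_response(5.0, 1e3, 1e-6, 5, 1e-3)     # dynamic output trace
for t, v in trace:
    print(f"t={t * 1e3:.0f} ms  v={v:.3f} V")
```

Either model is machine readable and executable in the sense used above; only the simulation carries the time dimension.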
Achieving machine readable and executable models means that the models must be developed using software. Useful languages used by software and systems engineers for such models are the Unified Modeling Language™ (UML®) and its derivative, SysML™. A brief introduction to these languages is presented here along with references for further study.
Tuesday, August 30, 2011
Chapters Available for Viewing or Download
The first 10 chapters of the book Introduction to Pattern and Model Based Systems Engineering are available for viewing or download at https://sites.google.com/site/themanagersguide/system-engineering. These ten chapters include all of the material posted to date on this blog. The final two chapters are planned to be posted in September, first as posts on this blog and then as complete downloadable chapters on the site above. After the final chapters are posted, a book will be published with all the chapters. This book is intended as a guide for training in modern methods that speed up systems engineering, reduce the cost of the systems engineering phase of product development and improve the quality of the systems engineering work.
Friday, July 15, 2011
Summarize Verification Results in a Compliance Matrix
9.2.5 Compliance Matrix – The data resulting from the actions summarized in the verification matrix, which verify that the system meets all requirements, are collected in a compliance matrix. The compliance matrix shows performance for each requirement. It flows performance from the lowest levels of the system hierarchy up to the top levels. It identifies the source of the performance data and shows whether the design meets all requirements. The bottom-up flow of performance provides early indication of non-compliant system performance and facilitates defining mitigation plans if problems are identified during verification actions. An example compliance matrix for the switch module is shown in Figure 9-3.
Note that the requirements half of the compliance matrix is identical to the requirements half of the verification matrix. The compliance matrix is easily generated by adding new columns to the verification matrix. Results that are non-compliant, such as the switching force, or marginally compliant, such as the on resistance, can be flagged by adding color to one of the value, margin or compliant columns or with notes in the comments column.
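A compliance matrix row reduces to a requirement, a limit, a measured value, a margin and a compliance flag. The sketch below uses hypothetical switch-module numbers, chosen so that the switching force is non-compliant and the on resistance is marginal, echoing the flags mentioned above:

```python
# Sketch of a compliance matrix: requirement, limit, measured value,
# margin, and a compliance flag.  All numbers are hypothetical.

def compliance_row(req_id, parameter, limit, measured, higher_is_worse):
    """Build one compliance-matrix row; positive margin means compliant."""
    margin = (limit - measured) if higher_is_worse else (measured - limit)
    if margin > 0:
        status = "compliant"
    elif margin == 0:
        status = "marginal"
    else:
        status = "NON-COMPLIANT"
    return {"req": req_id, "parameter": parameter, "limit": limit,
            "measured": measured, "margin": margin, "status": status}

matrix = [
    compliance_row("SW-010", "on resistance (ohm)", 0.50, 0.50, higher_is_worse=True),
    compliance_row("SW-020", "switching force (N)", 2.00, 2.30, higher_is_worse=True),
    compliance_row("SW-030", "isolation (dB)",      60.0, 72.0, higher_is_worse=False),
]

for row in matrix:
    print(f"{row['req']}: {row['parameter']:<22} margin {row['margin']:+.2f}  {row['status']}")
```

In a spreadsheet-based matrix the same margin test drives the color coding of the value, margin or compliant columns.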
In summary, the arrows labeled verification in Figure 6-4 (from functional analysis to requirements analysis, from design to functional analysis, and from design to requirements analysis) relate to the iteration that systems engineers perform to ensure the design is complete and accurate and that all "shall" requirements are verified in system integration and system test. This iteration ensures that a verification method is identified for each requirement and that any necessary test equipment, test software and data analysis software are defined in time to have validated test equipment, test procedures and test data analysis software ready when needed for system integration and test.
9.3 Systems Engineering Support to Integration, Test and Production
Manufacturing personnel and test personnel may have primary responsibility for integration, test and production; however, systems engineers must provide support to these tasks. Problem resolution typically involves both design and systems engineers, and perhaps other specialty engineers depending on the problem to be solved. Systems engineers are needed whenever circumstances require changes in parts or processes, to ensure system performance isn't compromised.
Tuesday, July 12, 2011
Planning for System Integration and Testing
9.2.2 System Integration and Test Plan – The purpose of the System Integration and Test Plan (SITP) is to define the step by step process for combining components into assemblies, assemblies into subsystems and subsystems into the system. It is also necessary to define the level at which software is integrated and the levels at which verification of software and hardware is conducted. Because of the intimate relationship of the verification matrix to system integration and test, it is recommended that the SITP be developed before the verification matrix is deemed complete.
The SITP defines the buildup of functionality, and the best approach is usually to build from lowest complexity to higher complexity. Thus, the first steps in integration cover the lowest levels of functionality, e.g. backplanes, operating systems and electrical interfaces. Then add increasing functionality such as device drivers, functional interfaces, more complex functions and modes. Finally, implement system threads such as major processing paths, error detection paths and end-to-end threads. Integration typically happens in two phases, hardware to hardware and software to hardware, because software configuration item testing often needs operational hardware to be valid. Two general principles to follow are: test functionality and performance at the lowest level possible, and, if it can be avoided, do not integrate any hardware or software whose functionality and performance have not been verified. It isn't always possible to follow these principles; e.g. sometimes software must be integrated with hardware before either can be meaningfully tested.
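The principle of not integrating unverified hardware or software can be checked mechanically against a draft integration sequence. A sketch with invented component and step names:

```python
# Sketch: check a planned integration sequence against the principle that
# a component should be verified before it is integrated.  Names invented.

def check_integration_plan(steps, verified):
    """Return (step, component) pairs that integrate unverified components."""
    violations = []
    for step, components in steps:
        for comp in components:
            if comp not in verified:
                violations.append((step, comp))
    return violations

verified = {"backplane", "operating system", "device driver"}
plan = [
    ("step 1: electrical baseline", ["backplane", "operating system"]),
    ("step 2: add drivers",         ["device driver"]),
    ("step 3: add modes",           ["mode controller"]),   # not yet verified
]

for step, comp in check_integration_plan(plan, verified):
    print(f"WARNING: {step} integrates unverified component '{comp}'")
```

Each warning is either a plan error to fix or a consciously accepted exception of the kind the text acknowledges, such as software that can only be tested on its target hardware.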
One objective of the SITP is to define a plan that avoids, as much as possible, having to disassemble the system to implement fixes to problems identified in testing. A good approach is to integrate risk mitigation into the SITP. For example, there is often a vast difference between the impact of an electrical design problem and that of a mechanical or optical design problem. Some electrical design or fabrication problems discovered in I & T of an engineering model can be corrected with temporary fixes ("green wires") and I & T can continue with minimal delay. However, a serious mechanical or optical problem found in the late stages of testing, e.g. in a final system level vibration test, can take months to fix due to the time it takes to redesign and fabricate mechanical or optical parts and conduct the necessary regression testing. Sometimes constructing special test fixtures for early verification of the performance of mechanical, electro-mechanical or optical assemblies is good insurance against discovering design problems in the final stages of I & T.
The integration plan can be described with an integration flow chart or with a table listing the integration steps in order. An integration flow chart graphically illustrates the components that make up each assembly, the assemblies that make up each subsystem, etc. Preparing the SITP is an activity that benefits from close cooperation among systems engineers, software engineers, test engineers and manufacturing engineers. For example, systems engineers typically define the top level integration flow for engineering models using the guidelines listed above. Manufacturing engineers typically define the detailed integration flow to be used for manufacturing prototypes and production models. If the systems engineers use the same type of documentation for defining the flow for the engineering model that manufacturing engineers use, then it is likely that the same documentation can be edited and expanded by manufacturing engineers for their purposes.
It should be expected that problems will be identified during system I & T. Therefore processes for reporting and resolving failures should be part of an organization’s standard processes and procedures. System I & T schedules should include contingency for resolving problems. Risk mitigation plans should be part of the SITP and be in place for I & T, such as having adequate supplies of spare parts, or even spare subsystems, for long lead time and high risk items.
System integration is complete when a defined subset of system level functional tests has been informally run and passed, all failure reports are closed out and all system and design baseline databases have been updated. The final products from system integration include the Test Reports, Failure Reports and the following updated documentation:
- Rebaselined System Definition
  - Requirements documents and ICDs
  - Test Architecture Definition
  - Test Plans
  - Test Procedures
- Rebaselined Design Documentation
  - Hardware Design Drawings
  - Fabrication Procedures
  - Formal Release of Software including Build Procedures and a Version Description Document
  - System Description Document
- TPM Metrics
It is good practice to gate integration closeout with a Test Readiness Review (TRR) to review the hardware/software integration results and to ensure that the system is ready to enter formal engineering or development model verification testing and that all test procedures are complete and in compliance with test plans. On large systems it is beneficial to hold a TRR for each subsystem or line replaceable unit (LRU) before holding the system level TRR.
9.2.3 Test Architecture Definition and Test Plans and Procedures – The SITP defines the tests that are to be conducted to verify performance at appropriate levels of the system hierarchy. Having defined the tests and test flow, it is necessary to define the test equipment and the plans and procedures to be used to conduct the tests. Different organizations may have different names for the documentation defining test equipment and plans. Here the document defining the test fixtures, test equipment and test software is called the Test Architecture Definition. The Test Architecture Definition should include the test requirements traceability database and the test system and subsystem specifications.
Test Plans define the approach to be taken in each test; i.e. what tests are to be run, the order of the tests, the hardware and software equipment to be used and the data that is to be collected and analyzed. Test Plans should define the entry criteria to start tests, suspension criteria to be used during tests and accept/reject criteria for test results.
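As a rough illustration, the three kinds of Test Plan criteria described above can be expressed as simple executable checks. The precondition names, telemetry limits and performance numbers below are invented for the sketch:

```python
# Illustrative sketch of Test Plan criteria: entry criteria to start a
# test, suspension criteria during the test, and an accept/reject
# criterion for results. All names and limits are invented.

def ready_to_start(entry_status):
    """Entry criteria: every precondition must be satisfied before starting."""
    return all(entry_status.values())

def should_suspend(telemetry, limits):
    """Suspension criteria: suspend if any monitored value exceeds its limit."""
    return any(telemetry[k] > limits[k] for k in limits)

def accept(measured, required_min):
    """Accept/reject criterion: measured performance meets the requirement."""
    return measured >= required_min

entry = {"test procedures approved": True, "test equipment calibrated": True}
print(ready_to_start(entry))                           # True: ok to start
print(should_suspend({"temp_C": 71}, {"temp_C": 70}))  # True: over-temp, suspend
print(accept(measured=98.6, required_min=95.0))        # True: result accepted
```

The value of writing criteria this explicitly, even in a plan document rather than code, is that each criterion becomes an unambiguous yes/no decision during the test.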
Test Procedures are the detailed step-by-step documentation to be followed in carrying out the tests and documenting the test results defined in the Test Plans. Other terminologies include a System Test Methodology Plan that describes how the system is to be tested and a System Test Plan that describes what is to be tested. Document terminology is not important; what is important is defining and documenting the verification process rigorously.
Designing, developing and validating the test equipment and test procedures for a complex system is nearly as complex as designing and developing the system and warrants a thorough systems engineering effort. Neglecting to put sufficient emphasis or resources on these tasks can result in delays of readiness of the test equipment or procedures and risks serious problems in testing due to inadequate test equipment or processes. Sound systems engineering practices treat test equipment and test procedure development as deserving the same disciplined effort and modern methods as used for the system under development.
The complexity of system test equipment and system testing drives the need for disciplined system engineering methods and is the reason for developing test related documentation in the layers of SITP, Test Architecture Definition, Test Plans and finally Test Procedures. The lower complexity top level layers are reviewed and validated before developing the more complex lower levels. This approach abstracts detail in the top levels making it feasible to conduct reviews and validate accuracy of work without getting lost in the details of the final documentation.
The principle of avoiding having to redo anything that has been done before also applies to developing the Test Architecture Definition, Test Plans and Test Procedures. This means designing the system to be testable using existing test facilities and equipment where this does not compromise meeting system specifications. When existing equipment is inadequate, strive to find commercial off the shelf (COTS) hardware and software for the test equipment. If it is necessary to design new special purpose test equipment, consider whether future system tests are likely to require similar new special purpose designs. If so it may be wise to use pattern based systems engineering for the test equipment as well as the system.
Where possible use test methodologies and test procedures that have been validated through prior use. If changes are necessary, developing Test Plans and Procedures by editing documentation from previous system test programs is likely to be faster, less costly and less prone to error than writing new plans. Sometimes test standards are available from government agencies.
9.2.4 Test Data Analysis – Data collected during systems tests often requires considerable analysis in order to determine if performance is compliant with requirements. The quantity and types of data analysis needed should be identified in the test plans and the actions needed to accomplish this analysis are to be included in the test procedures. Often special software is needed to analyze test data. This software must be developed in parallel with other system software since it must be integrated with test equipment and validated by the time the system completes integration. Also some special test and data analysis software may be needed in subsystem tests during integration. Careful planning and scheduling is necessary to avoid project delays due to data analysis procedures and software not being complete and validated by the time it is needed for system tests.
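A minimal sketch of the kind of data reduction a Test Data Analysis Plan might call for, assuming a simple peak-value requirement; the statistic chosen and the limit are illustrative, not from any real plan:

```python
# Sketch of test data analysis: reduce raw test samples to summary
# statistics and compare against a requirement limit. The requirement
# (peak value must not exceed a maximum) is invented for this example.

from statistics import mean

def analyze(samples, requirement_max):
    """Reduce samples and record whether the peak meets the requirement."""
    result = {"mean": mean(samples), "peak": max(samples)}
    result["compliant"] = result["peak"] <= requirement_max
    return result

report = analyze([0.8, 1.1, 0.9, 1.0], requirement_max=1.2)
print(report)  # peak 1.1 is within the 1.2 limit, so compliant
```

Even this trivial example shows why the analysis software must be validated before system tests begin: a wrong statistic or limit silently produces a wrong compliance verdict.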
Friday, July 8, 2011
Methods for Verifying System Performance
9.2 Verifying that System Requirements are Met
The first phase of verifying system requirements is a formal engineering process that starts with requirements analysis and ends when the system is accepted by its customer. During system integration and testing, steps must be taken to verify that the system satisfies every “shall” statement in the requirements. These “shall” statement requirements are collected in a document called the Verification Matrix. The results of the integration and testing of these requirements are documented in a Compliance Matrix. The integration and testing is defined and planned in a System Integration and Test Plan. Related documentation includes the Test Architecture Definition, hardware and software Test Plans & Procedures and Test Data Analysis Plans.
The roles of systems engineers in verification include:
- Developing the optimum test strategy and methodology and incorporating it into the design as it is developed
- Developing the top level System Integration and Test Plan
- Developing the hierarchy of Integration and Test Plans from component level to system level
- Documenting all key system and subsystem level tests
- Defining system and subsystem level test equipment needed and developing test architecture designs
- Developing the Test Data Analysis Plans
- Analyzing test data
- Ensuring that all shall requirements are verified and documented in the Compliance Matrix.
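The last role above, ensuring that every “shall” is verified and documented, amounts to a cross-check between the verification matrix and the compliance matrix. A hypothetical sketch, with invented requirement IDs:

```python
# Hypothetical cross-check: every requirement in the verification matrix
# must have a verdict recorded in the compliance matrix before closeout.
# Requirement IDs and verdicts below are invented.

def unverified(verification_matrix, compliance_matrix):
    """Return requirement IDs that have no recorded compliance result."""
    return sorted(set(verification_matrix) - set(compliance_matrix))

verification = ["SYS-001", "SYS-002", "SYS-003"]
compliance = {"SYS-001": "Pass", "SYS-003": "Pass"}
print(unverified(verification, compliance))  # ['SYS-002'] is still open
```

Running a check like this against the actual matrices, rather than eyeballing two documents, is the kind of discipline that keeps a stray requirement from slipping through to acceptance unverified.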
Good systems engineering practice requires that requirements verification takes place in parallel with requirements definition. The decision to define a requirement with a “shall” or “may” or “should” statement involves deciding if the requirement must be verified and if so how the requirement will be verified. This means that the requirements verification matrix should be developed in parallel with the system requirements documentation and reviewed when the system requirements are reviewed, e.g. at peer reviews and at a formal System Requirements Review (SRR).
9.2.1 Verification Matrix – The verification matrix is documentation that defines for each requirement the verification method, the level and type of unit for which the verification is to be performed and any special conditions for the verification. Modern requirements management tools facilitate developing the verification matrix. If such tools are not used then the verification matrix can be developed using standard spreadsheet tools.
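When standard spreadsheet tools are used, the matrix rows can be generated from the requirements data rather than retyped, so the requirement text cannot drift. A sketch using CSV, with invented requirement IDs, text and methods:

```python
# Sketch of a verification matrix maintained with ordinary spreadsheet
# tooling (CSV here). Rows are generated from the requirements data so
# nothing is re-entered by hand. All requirement rows are illustrative.

import csv
import io

requirements = [
    ("SW-REQ-010", "The module shall switch power within 5 ms.", "Test"),
    ("SW-REQ-011", "The module shall report switch state.", "Demonstration"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Req ID", "Requirement text", "Method", "Level"])
for req_id, text, method in requirements:
    # The verification level would normally come from the SITP; a fixed
    # placeholder is used here.
    writer.writerow([req_id, text, method, "Engineering model"])

print(buf.getvalue())
```

In practice the `requirements` list would be an export from the requirements management tool or requirements spreadsheet, not a hand-typed literal.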
There are standard verification methods used by systems engineers. These methods are:
- Analysis - Verifies conformance to required performance by the use of analysis based on verified analytical tools, modeling or simulations that predict the performance of the design with calculated data or data from lower level component or subsystem testing. Used when physical hardware and/or software is not available or not cost effective.
- Inspection - Visually verifies form, fit and configuration of the hardware and of software. Often involves measurement tools for measuring dimensions, mass and physical characteristics.
- Demonstration - Verifies the required operability of hardware and software without the aid of test devices. If test devices are required, they are selected so as not to contribute to the results of the demonstration.
- Test - Verifies conformance to required performance, physical characteristics and design construction features by techniques using test equipment or test devices. Intended to be a detailed quantification of performance.
- Similarity - Verifies requirement satisfaction based on certified usage of similar components under identical or harsher operating conditions.
- Design – Used when compliance is obvious from the design, e.g. “The system shall have two modes, standby and operation”.
- Simulation – Compliance applies to a finished data product after calibration or processing with system algorithms. May be the only way to demonstrate compliance.
The DoD SEF defines only the first four of the methods listed above. Many experienced systems engineers find these four too restrictive and also use the other three methods listed. To illustrate a verification matrix with an example consider the function Switch Power. This function might be decomposed as shown in Figure 9-1.
Figure 9-1 A function Switch Power might be decomposed into four sub functions.
An example verification matrix for the functions shown in Figure 9-1 is shown in Figure 9-2. In this example it is assumed that the switch power function is implemented in a switch module and that both an engineering model and a manufacturing prototype are constructed and tested. No verification of the switch module itself is specified for production models; verification of the module performance for production models is assumed to be included in other system level tests.
It’s not important whether the verification matrix is generated automatically from requirements management software or by copy and paste from a requirements spreadsheet. What is important is not having to reenter requirements from the requirements document into the verification matrix, as reentry opens the door to simple typing mistakes.
Figure 9-2 An example verification matrix for a switch module.
Wednesday, July 6, 2011
Verifying the Performance of a System Design
9 Processes and Tools for Verifying Technical Performance
9.0 Introduction
There are two approaches to verifying technical performance. One is using good engineering practices in all the systems engineering and design work to ensure that the defined requirements and the design meet customer expectations. The other is a formal verification process applied in two phases to hardware and software resulting from the design to verify that requirements are met. Both begin during requirements analysis and continue until a system is operational. The work is represented by the three arrows labeled verification in Figure 6-4 that constitute the backward part of the requirements loop, the design loop and the loop from design synthesis back to requirements analysis and includes both verifying completeness and accuracy of the design and verifying the technical performance of the system.
The first phase of formal system verification typically ends with delivery of a system to the customer but may include integration and testing with a higher level system of the customer or another supplier. The second phase of system verification is accomplished by testing the system in its intended environment and used by its intended users. This phase is typically called operational test and evaluation and is the responsibility of the customer for military systems but may involve the supplier for commercial systems and some NASA systems.
9.1 Verifying Design Completeness and Accuracy
Verifying the completeness and accuracy of the design is achieved by a collection of methods and practices rather than a single formal process. The methods and practices used by systems engineers include:
- System engineers checking their own work
- Checking small increments of work via peer reviews
- Conducting formal design reviews
- Using diagrams, graphs, tables and other models in place of text where feasible and augmenting necessary text with graphics to reduce ambiguity
- Using patterns vetted by senior systems engineers to help ensure completeness and accuracy of design documentation
- Developing and comparing the same design data in multiple formats, e.g. diagrams and matrices or diagrams and tables
- Verifying functional architecture by developing a full set of function and mode mapping matrices including:
  - Customer defined functions to functions derived by the development team (Some team derived functions are explicit in customer documentation and some are implicit.)
  - Functions to functions for defining internal and external interfaces among functions
  - Sub modes to functions for each of the system modes
  - Mode and sub mode transition matrices defining allowable transitions between modes and between sub modes
- Using tools such as requirements management tools that facilitate verifying completeness and traceability of each requirement
- Using QFD and Kano diagrams to ensure completeness of requirements and identify relationships among requirements
- Using robust design techniques such as Taguchi Design of Experiments
- Iterating between requirements analysis and functional analysis and between design synthesis and functional analysis
- Employing models and simulations to both define requirements and verify that design approaches satisfy performance requirements
- Validating all models and simulations used in systems design before use (Note the DoD SEF describes verification, validation and accreditation of models and simulations. Here only verification and validation are discussed.)
- Employing sound guidelines in evaluating the maturity of technologies selected for system designs
- Maintaining a thorough risk management process throughout the development program
- Conducting failure modes and effects analysis (FMEA) and worst case analysis (WCA).
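One of the mapping matrices in the list above, the mode transition matrix, lends itself to a simple executable form. A sketch with illustrative modes (the modes and allowed transitions are invented):

```python
# Sketch of a mode transition matrix: for each current mode, the set of
# modes the system is allowed to enter next. Modes are illustrative.

ALLOWED = {
    "Off":       {"Standby"},
    "Standby":   {"Off", "Operation"},
    "Operation": {"Standby"},
}

def transition_ok(current, proposed):
    """True if the mode change is permitted by the transition matrix."""
    return proposed in ALLOWED.get(current, set())

print(transition_ok("Standby", "Operation"))  # True: allowed transition
print(transition_ok("Off", "Operation"))      # False: must pass through Standby
```

Reviewing such a matrix row by row is an effective completeness check: every mode must appear, and every omitted transition is a deliberate prohibition rather than an oversight.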
Engineers checking their own work are the first line of defense against human errors that can affect system design performance. One of the most important things that experienced engineers can teach young engineers is the importance of checking all work, along with the collection of methods for verifying work that they have learned over their years in the engineering profession. Engineers are almost always working under time pressure and it takes discipline to take the time to check work at each step so that simple human mistakes don’t result in having to redo large portions of work. This is the same principle that is behind using peer reviews to catch mistakes early, so that little rework is required, rather than relying on catching mistakes at major design reviews, where correcting mistakes often requires significant rework with significant impact on schedule and budget.
A reason for presenting duplicate methods and tools for the same task in Chapter 6 was not just that different people prefer different methods but also to provide a means of checking the completeness and accuracy of work. The time it takes to develop and document a systems engineering product is usually a small fraction of the program schedule, so taking time to generate a second version in a different format does not significantly impact schedule and is good insurance against incomplete or inaccurate work.
Pattern based systems engineering, QFD and Taguchi Design of Experiments (DOE) help ensure the completeness, accuracy and robustness of designs. Extensive experience has demonstrated the cost effectiveness of using these methods even though QFD and Taguchi DOE require users to have specialized training to be effective.
Studies have shown that selecting immature technologies results in large increases in costs (see http://www.dau.mil/conferences/presentations/2006_PEO_SYSCOM/tue/A2-Tues-Stuckey.pdf). Immature technologies often do not demonstrate expected performance in early use and can lead to shortfalls in the technical performance of designs. As a result both NASA and DoD include guidelines for selecting technologies in their acquisition regulations. Definitions of technology readiness levels used by NASA are widely used and are listed at http://esto.nasa.gov/files/TRL_definitions.pdf. Definitions are also provided in Supplement 2-A of the DoD SEF.
Failure analysis methods like FMEA and WCA are more often used by design engineers than systems engineers, but systems engineers can include failure modes and worst case considerations when defining the criteria used in system design trade studies.
In summary, verifying technical performance during system development includes the disciplined use of good engineering practices as well as the formal performance verification process. This should not be surprising as these practices have evolved through experience specifically to ensure that system designs meet expected performance as well as other requirements.
Thursday, June 30, 2011
Who Leads System Design?
8.5 Design Oversight Responsibility
Although it is not the intent of this book to describe systems engineering management processes it is helpful to briefly describe how roles and responsibilities change during the system development phases. The first change in responsibility takes place when the baseline design is refined through trade studies to be the preferred design, e.g. a best value design, and the design requirements database is complete. The system development work is then at the end of the define requirements phase and ready to enter the design phase. This means the design requirements are complete to the level of responsibility of each of the lowest level IPTs, assuming the work is organized so that the lowest level IPT leaders are able to work in the “craftsman” model, i.e. the leader has the knowledge and experience to make all the design decisions for the work assigned to his/her IPT. At this point the individual IPTs take leadership responsibility from the SEIT or systems engineering and lead in determining how the system is designed and any prototypes built.
During the design phase systems engineering watches over the design to ensure requirements compliance, testability and producibility and monitors MOEs, progress on “ilities” and risk management. In addition, systems engineering is responsible for managing any specification changes. If designers encounter difficulties in meeting an allocated requirement then systems engineers should take responsibility for determining if and how the requirement in question can be modified without jeopardizing overall requirements compliance. The systems engineering role during the design phase can be summarized as supporting the designers to ensure that a balanced and compliant design is achieved that is testable, producible and maintainable.
The transition from the system definition phase to the design phase is typically gated with a design review, e.g. a System Design Review (SDR). Program managers often wish to move systems engineers off a development project when the action items from an SDR are complete. Although this is a time to increase the design specialty engineers and decrease the systems engineers on the IPTs, removing too many systems engineers can leave the designers without adequate systems engineering support and cause other necessary tasks to be understaffed. It is better to assign the systems engineers to tasks like preparing system and subsystem integration and test plans and completing and maintaining the system design documentation.
When the design phase is complete and any prototypes are fabricated, systems engineering resumes lead responsibility during test even though other specialty engineers may conduct the testing. Typically this change in leadership occurs sometime during integration and subsystem testing of an engineering model or prototype. These leadership responsibility transitions should occur naturally for IPTs with experienced systems and design engineers.
Tuesday, June 28, 2011
Some Useful Design Concept Diagrams
8.4 Diagrams Useful in Selecting the Preferred Design
As discussed previously, much of systems engineering is determining relationships between a system and its environment and among the various subsystems. Ultimately these relationships are defined in detailed drawings, but understanding the relationships in order to select the preferred design is aided by examining a system at different levels of abstraction. The modern tools and design practices of electrical and optical design engineers develop design concepts with diagrams. The diagrams start with block diagrams with a high level of abstraction, perhaps just naming the subsystems, and proceed to greater and greater detail. This process makes it easy for systems engineers and other design engineers to readily understand the electrical and optical design concepts.
Although there are excellent modern design tools for mechanical and thermal design it isn’t as easy to present the mechanical and thermal designs with ever decreasing levels of abstraction so that system engineers and other design engineers can easily understand the mechanical and thermal designs. Experienced mechanical and thermal design engineers develop the desired diagrams and tailor their diagrams to the system being developed so that others can readily understand and assess their designs. A few examples are presented here to illustrate how experienced mechanical and thermal designers examine and communicate their design concepts. Note how easy it is to think of alternative design approaches when design concepts are presented in simple block diagram form with a high degree of abstraction. This enables engineers other than expert mechanical and thermal designers to assess design concepts and suggest design alternatives.
8.4.1 Simple Mechanical and Thermal Block Diagrams - Many times a simple block diagram is useful in describing a mechanical or a combined mechanical/thermal design concept. Figure 8-7 is an example showing a system that consists of five assemblies, a frame for mounting the assemblies and a mounting plate that supports the system with a three point mount.
Figure 8-7 A simple mechanical block diagram of a system illustrates how the assemblies interact with each other and the mounting frame.
It’s easy to see that the design concept is for each assembly to be coupled to the mounting frame and uncoupled from any other assembly. Four of the assemblies are temperature controlled whereas the fifth assembly is attached to the mounting frame but not within the temperature controlled region. By abstracting all of the size, shape, material and other characteristics of the system the basic mechanical and thermal relationships are easily understood and alternative concepts are obvious. The actual models that the mechanical designer uses to conduct trade studies of alternate mounting concepts are of course much more detailed and usually include the detailed characteristics of the various assemblies, frame and mounting plate; however, this detail is not necessary to explain the concepts and results of the trades to the system engineers and other designers.
Let’s suppose that three of the assemblies of the system shown in Figure 8-7 have critical alignment requirements. A perfectly good way to record and communicate the alignment requirements is with allocation trees. However, sometimes it makes the requirements clearer and possibly less prone to misunderstandings if a simple diagram is used in place of a tree. Such a diagram with the alignment requirements for each of the three assemblies and for the attachment points of each on the system mounting frame might look like Figure 8-8.
Figure 8-8 A simple block diagram illustrating the alignment requirements for three of the assemblies and their respective interfaces with the mounting frame.
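The bookkeeping behind an alignment allocation tree like the one underlying Figure 8-8 can also be sketched in a few lines, assuming independent error contributors combined by root-sum-square; the contributor names and values (in milliradians) are invented:

```python
# Sketch of allocation-tree roll-up for an alignment budget: each
# assembly's total is the root-sum-square of its allocated contributors,
# assuming the contributors are independent. Names and values invented.

from math import sqrt

allocations = {
    "assembly internal alignment": 0.30,
    "mounting interface": 0.20,
    "thermal drift": 0.15,
}

def rss(values):
    """Root-sum-square combination of independent error contributors."""
    return sqrt(sum(v * v for v in values))

total = rss(allocations.values())
print(f"{total:.3f} mrad")  # roll-up must stay within the system requirement
```

Whether the allocations are recorded in a tree or in a diagram like Figure 8-8, the roll-up arithmetic is the same; the diagram simply makes the interfaces that each number belongs to harder to misread.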
Similar simple block diagrams can be used to illustrate temperatures and heat flow paths. It sometimes makes design concepts easier to understand if mechanical and thermal diagrams are developed together. Examples are shown in Figures 8-9 and 8-10 that illustrate simple block diagrams of structural interfaces and thermal interfaces on similar block diagrams.
Figure 8-9 A block diagram illustrating structural interfaces within a system and between the system and its parent platform.
Figure 8-10 A block diagram illustrating thermal interfaces within a system and between the system and its environment using the same diagram approach as used for the structural interface diagram.
Even though it takes some time and care to develop diagrams such as shown in the examples above the benefits to the team in understanding and refining design concepts to reach the preferred design are well worth the effort. Diagrams like these and related simple diagrams for electrical, optical and other design concepts are invaluable in explaining a design concept to customers and management.