
Showing posts with label Requirements Analysis. Show all posts

Tuesday, May 10, 2011

Constructing the Basic House of Quality

7.4 Basic Matrix Structure
The QFD process is represented by a series of interconnected matrices that establish the WHAT's, the HOW's and the interrelationships of all parameters involved in the product development process. The QFD method is simply a disciplined way of deploying the voice of the customer (VOC) through each stage of the product cycle. The objective is to keep all efforts focused on the VOC requirements while optimizing cost and minimizing cycle time.
The QT's are used in each product phase to communicate the knowledge developed to the next stage. In each stage, translations take place that systematically carry the VOC into actions taken by functional organizations, resulting in a product or service that satisfies the customer. The purpose of the QT charts is to focus on answering three questions: WHAT, HOW and HOW MUCH. For each product stage, and for each action taken in that stage, these three questions must be addressed.
The “House of Quality”, sometimes referred to as the “Enhanced House of Quality”, consists of multiple “rooms”. Four of the “rooms” are lists that capture the “What’s, How’s, Why’s and How Much’s” of the project. Four additional “rooms” are formed by determining the correlations and relationships between these lists. Figure 7-4 illustrates the basic structure and location of these “rooms”. The following sections provide detail on forming the lists and relationships between the “rooms” that make up the “House of Quality”. All four phases of the hierarchical matrices follow this basic structure and form.

Figure 7-4 The rooms and relationships of the house of quality.
7.4.1 Voice of the Customer (The “What’s”) - QFD starts with a list of objectives, or the WHATs that we want to accomplish. In the context of developing a new product this is a list of customer requirements and is often called the Voice of the Customer (VOC).  The items contained in this list are usually very general, vague and difficult to implement directly; they require further detailed definition. These vague needs are sometimes called “verbatims”, (e.g. easy to use, lasts long time, light weight, low power, easy to modify).
                       
Figure 7-5  The “what’s” defined by the VOC are often general statements.
One such item might be “easy to test”, which has a wide variety of meanings to different people. This is a highly desirable product feature, but is not directly actionable.
7.4.2 Transformation of Action - Once the list of WHAT’s is developed, each requires further definition. The list is refined into the next level of detail by listing one or more HOW's for each WHAT, (i.e. How are we going to satisfy the WHAT’s) as shown in Figure 7-6. This process can be further refined and expanded into a more detailed list of HOW’s.
                                   
Figure 7-6 The list of WHAT’s are transformed into a list of HOW’s
The objective of this refinement is to identify actionable requirements - each one a requirement for which a clear action can be taken that will satisfy a WHAT.
7.4.3 Handling Complex Relationships - The problem that is encountered is depicted in Figure 7-7. Many of the HOW's identified affect more than one WHAT. Charting the WHAT's and HOW's sequentially would produce a maze of lines due to the interrelationships that exist between the parameters.

Figure 7-7 Many HOW’s affect more than one WHAT

7.4.4 Structuring the Relationships in a Matrix - The complexity of the sequential process is solved by creating a matrix with the HOW list across the top (horizontally) and the WHAT list down the side (vertically). This captures the RELATIONSHIPS between the WHAT's and HOW's where each pair intersects. This is called the Relationship Matrix (distinct from the Correlation Matrix of Section 7.4.7, which relates the HOW's to one another). Figure 7-8 illustrates, by the use of an “X”, where the WHAT's and HOW's are interrelated.
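In practice the matrix can be held in any tabular tool. A minimal Python sketch, with hypothetical WHAT's and HOW's (not from the text's figures), illustrates the row-by-column structure:

```python
# Sketch of a WHAT-vs-HOW relationship matrix; an "X" marks each
# intersection where a HOW addresses a WHAT. All names are hypothetical.
whats = ["easy to test", "low power"]
hows = ["limit component spacing", "add test points", "select low-power parts"]

# relationships[w][h] is True where WHAT w is addressed by HOW h
relationships = {
    "easy to test": {"limit component spacing": True, "add test points": True,
                     "select low-power parts": False},
    "low power": {"limit component spacing": False, "add test points": False,
                  "select low-power parts": True},
}

def render_matrix(whats, hows, relationships):
    """Return one row per WHAT with an X at each related intersection."""
    rows = []
    for w in whats:
        marks = ["X" if relationships[w][h] else "." for h in hows]
        rows.append((w, marks))
    return rows

for w, marks in render_matrix(whats, hows, relationships):
    print(f"{w:15s} {' '.join(marks)}")
```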

Figure 7-8 A relationship matrix captures the relationships between the WHAT’s and HOW’s

7.4.5 Kinds of Relationships - The RELATIONSHIPS are the third key element of any QFD matrix and are depicted by placing symbols at the intersections of the WHATs and HOWs that are related. It is possible to depict the strength of the relationships by using different symbols. Commonly used symbols are shown in Figure 7-9.

Figure 7-9 Symbols used to show the strength of relationships.
This method allows very complex relationships to be depicted graphically and is easily interpreted as shown in Figure 7-10.

Figure 7-10 Strength symbols are placed in the matrix relating each WHAT to its respective HOW’s.
Throughout the QFD process there are repeated opportunities to cross check thinking, leading to better and more complete designs. This technique of evolving plans into actions is useful for new product development as well as for applications in business planning and systems design.
7.4.6 Target Values (How Much) - The fourth key element of any QFD chart is the HOW MUCH section. These are the measurements for the HOWs. These target values should represent what is necessary to satisfy the customer and may not be current performance levels.  Easy to test, when translated into detailed requirements may be measured in terms of the number of test points, requirement for component spacing, component edge clearance, etc. The component clearance would be a HOW and the HOW MUCH would be 0.020 inches minimum.  HOW MUCH's are needed for two reasons:
1.      To provide an objective means of assuring that requirements have been met.
2.      To provide targets for further detailed development
 Figure 7-11 The HOW MUCH’s are added in rows at the bottom of the matrix.

The HOW MUCH's, added to the matrix as shown in Figure 7-11, provide specific objectives that guide the subsequent design and afford a means of objectively assessing progress, minimizing “opinion-eering”. The HOW MUCH's should be measurable wherever possible, because measurable items provide more opportunity for analysis and optimization than non-measurable items. This aspect provides another cross check on thinking: if most of the HOW MUCH's are not measurable, then the definition of the HOW's is not detailed enough. The HOW's related to the WHAT's become one means to check and measure whether the WHAT requirements are being met. Viewed another way, meeting the target values satisfies the HOW requirement. If all of the HOW requirements related to a VOC item by the relationship matrix are satisfied, then the VOC item is met. The focus can therefore shift to meeting the target values without direct concern for the VOC; it is taken care of by fulfillment of the HOW MUCH's. These four key elements (WHAT, HOW, RELATIONSHIPS, HOW MUCH) form the foundation of QFD and can be found on any QFD chart.
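The logic of the last point - a VOC item is met when every HOW related to it meets its HOW MUCH target - can be sketched as follows. The HOW names, targets and measured values are hypothetical, loosely modeled on the "easy to test" example above:

```python
# Sketch: a VOC item (WHAT) is considered met when every HOW related to
# it in the relationship matrix meets or exceeds its HOW MUCH target.
# All names and values are hypothetical.
related_hows = {
    "easy to test": ["component edge clearance", "number of test points"],
}
# HOW MUCH targets (minimums) and current measured values
targets = {"component edge clearance": 0.020, "number of test points": 30}
measured = {"component edge clearance": 0.025, "number of test points": 34}

def what_is_met(what):
    """True if every HOW related to this WHAT meets its target value."""
    return all(measured[h] >= targets[h] for h in related_hows[what])

print(what_is_met("easy to test"))  # both related HOWs meet their targets
```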

7.4.7 Correlation Matrix - The CORRELATION MATRIX is a triangular table, often attached to the HOWs, establishing the correlation between each pair of HOW items. The purpose of this roof-like structure is to identify areas where trade-off decisions, conflicts, and research and development may be required. As in the RELATIONSHIP MATRIX, symbols are used to describe the strength of the relationships. The CORRELATION MATRIX also describes the type of relationship. The symbols commonly used are shown in Figure 7-12.

Figure 7-12 Symbols used to indicate correlation between pairs of HOW’s.

The correlation matrix identifies which of the HOW's support one another and which are in conflict. Positive correlations are those in which one HOW supports another HOW. These are important because resource efficiencies are gained by not duplicating efforts to attain the same result. Negative correlations are those in which one HOW adversely affects the achievement of another HOW: an action that improves one HOW has a degrading effect on the other. These conflicts are extremely important; they represent conditions in which trade-offs are suggested. If there are no negative correlations there is probably an error. A well optimized product is almost always the result of some level of trade-off, which is expressed by a negative correlation.
Generally, every HOW MUCH item has a desired direction. For example, for a POWER of 100 watts, driving it lower is generally better. A good test for determining whether a relationship is positive or negative is to ask: if power is driven toward its desired direction, are the other HOW's driven toward or away from their desired target values? If a HOW is driven toward its desired target value when power moves toward its desired target value, it is a POSITIVE RELATIONSHIP; if it is driven away from its desired target value, it is a NEGATIVE RELATIONSHIP.
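This sign test can be sketched in code. The HOW names, desired directions and effects below are hypothetical illustrations, not taken from the text's figures:

```python
# Sketch of the correlation sign test: drive one HOW (power) toward its
# desired direction and see whether each other HOW moves toward or away
# from its own desired direction. All entries are hypothetical.
desired_direction = {"power": "lower", "heat sink mass": "lower",
                     "processing speed": "higher"}

# Effect on each other HOW's value when power is driven lower
effect_of_lower_power = {"heat sink mass": "lower",
                         "processing speed": "lower"}

def correlation_sign(other_how):
    """POSITIVE if the other HOW moves toward its desired direction."""
    moved = effect_of_lower_power[other_how]
    return "POSITIVE" if moved == desired_direction[other_how] else "NEGATIVE"

print(correlation_sign("heat sink mass"))    # moves toward its target
print(correlation_sign("processing speed"))  # driven away: a trade-off
```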
Be cautious not to jump to a trade-off too quickly. The goal is to accomplish all of the HOW’s in order to satisfy customer requirements. The response to a negative correlation should be to seek a way to make the trade-off go away. This may require some degree of innovation or a research and development effort that may lead to a significant competitive advantage.




Figure 7-13 The correlation matrix is constructed on top of the HOW’s.
Frequently, negative correlations indicate conditions in which design and physics are in conflict. When this occurs physics always wins. Such trade-offs must be resolved. Trade-offs that are not identified and resolved often lead to unfulfilled requirements even though everyone has done their best. Some of the trade-offs may require high level decisions because they cross engineering group, department, divisional or company lines. Early resolution of these trade-offs is essential to shorten program timing and avoid nonproductive internal iterations while seeking a nonexistent solution.
Trade-off resolution is accomplished by adjusting the values of the HOW MUCH's. These decisions are based on all the information normally available: business and engineering judgment as well as various analysis techniques. If trade-offs are to be made, they should be made in favor of the customer, not in favor of what is easiest for the company.
7.4.8 Competitive Assessment - The COMPETITIVE ASSESSMENT is a pair of graphs that depict item for item how competitive products compare with current company products. This is done for the WHAT’s as well as the HOW’s. The COMPETITIVE ASSESSMENT of the WHAT’s is often called a Customer Competitive Assessment, and should utilize customer oriented information. It is extremely important to understand the customer's perception of a product relative to its competition.
The COMPETITIVE ASSESSMENT of the HOW’s is often called a Technical Competitive Assessment, and should utilize the best engineering talent to analyze competitive products. The COMPETITIVE ASSESSMENT can be useful in establishing the value of the objectives (HOW MUCH's) to be achieved.  This is done by selecting values which are competitive for each of the most important issues. The COMPETITIVE ASSESSMENT provides yet another way to cross check thinking and uncover gaps in engineering judgment. If the HOW’s are properly evolved from the WHAT’s, the COMPETITIVE ASSESSMENTs should be reasonably consistent.
WHAT and HOW items that are strongly related should also exhibit a relationship in the COMPETITIVE ASSESSMENT. For example, if we believe superior damping will result in an improved ride, the COMPETITIVE ASSESSMENT would be expected to show that products with superior damping also have superior ride, as illustrated in Figure 7-14.
If this does not occur, it calls attention to the possibility that something significant may have been overlooked. If not acted upon, we may achieve superior performance against our "in house" tests and standards, but fail to achieve expected results in the hands of our customers.
The IMPORTANCE RATING is useful for prioritizing efforts and making trade-off decisions. Numerical tables or graphs will depict the relative importance of each WHAT or HOW to the desired end result. The WHAT IMPORTANCE RATING is established based on customer assessment. It is expressed as a relative scale (typically 1-5) with the higher numbers indicating greater importance to the customer. The importance ratings are listed in a column between the WHAT’s and the matrix. It is important that these values truly represent the customer, rather than internal company beliefs. Since we can only act from the HOW’s, importance ratings for these HOW’s are needed.


Figure 7-14 Competitive assessment of the WHAT’s are put in a box on the right side of the matrix.

7.4.9 Importance Ratings - Weights are assigned to the RELATIONSHIP symbols; e.g. the 9-3-1 weighting shown in Figure 7-15 achieves a good variance between important and less important items. Other weighting systems may be used. For each column (or HOW), the WHAT importance value is multiplied by the symbol weight, producing a value for each RELATIONSHIP. Summing these values vertically defines the HOW importance value. In Figure 7-16 the HOW importance rating for the first column is calculated in the following manner. The double circle symbol weight (9) is multiplied by the WHAT importance value (5), forming a RELATIONSHIP value of 45. The next double circle symbol weight (9) is multiplied by the WHAT importance value (2), forming a RELATIONSHIP value of 18. These two values (45 + 18) form the HOW importance value of 63. This process is repeated for each column as shown in Figure 7-16.
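The column calculation described above can be sketched directly. The symbol weights and WHAT importance values are the ones from the worked example; the symbol names are descriptive stand-ins:

```python
# The HOW importance calculation from the worked example: 9-3-1 symbol
# weights, each multiplied by the WHAT importance value for its row,
# then summed down the column.
SYMBOL_WEIGHT = {"double_circle": 9, "circle": 3, "triangle": 1, None: 0}

what_importance = [5, 2]  # WHAT importance values from the text
# One column of the relationship matrix: the symbol relating each WHAT
# to this HOW (two double circles in the worked example).
column = ["double_circle", "double_circle"]

how_importance = sum(
    SYMBOL_WEIGHT[sym] * imp for sym, imp in zip(column, what_importance)
)
print(how_importance)  # 9*5 + 9*2 = 63, matching Figure 7-16
```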
 Figure 7-15 Importance ratings are obtained by assigning weights to the symbols in the relationship matrix


Figure 7-16 Importance ratings are calculated for each HOW as the sum of the weighted importance of each WHAT.
The IMPORTANCE RATING for the HOW’s provides the relative importance of each HOW in achieving the collective WHAT’s. We see that, of the HOW’s listed, “Maximum Power” with a target value of 200 watts has the highest relative importance. Greater emphasis should be placed on the HOW with the 83 rating than on the other HOW’s. It is important that we not be blindly driven by these numbers. The numbers are intended to help us, not constrain us. Look upon the numbers as further opportunities to cross check thinking. Question the relative values of the numbers in light of judgment. Is it reasonable that the HOW valued at 83 is the most important? Is it reasonable that HOW’s with similar ratings are nearly equal in importance?

7.4.10 The Basic Matrix Structure - The previous sections can now be integrated into one chart. Figure 7-17 illustrates the Basic Matrix Structure. All of the matrices used in the product development stages can have these basic sections. Note that the correlation matrix, when added to the relationship matrix, takes on the shape of a house with a roof. It is from this construction that the QFD matrices are termed “Houses of Quality”.

Figure 7-17 The basic form of the House of Quality relates the VOC and competitive assessment information to design requirements.

Tuesday, March 29, 2011

Methods for Verifying Functional Architecture

6.5.4 Verify the Functional Architecture
The functional architecture is the FFBDs and the allocated requirements. The collection of all documentation developed during the functional analysis and allocation task is called the functional view. The final task in defining the functional architecture is to review all of the functional view documentation for consistency and accuracy. Check the functions defined for each mode and sub mode to verify that no functions are missing and that the requirements allocated to each function are appropriate for each mode and sub mode. An example of a matrix of modes to functions, useful for verifying that all top level functions needed for each sub mode of a toaster in its In Use mode are defined properly, is shown in Figure 6-29. This example examines only one system mode, but the process for examining all modes and all lower level functions is just an extension of the matrix.
The methodology of verifying functional design by using two different tools to describe the same functional requirements also applies to mode transitions. Figure 6-30 is an example of a matrix used to define the allowed transitions among modes of the In Use mode as previously defined with a mode transition diagram in Figure 6-11. Although this is a trivial example it illustrates the methodology.
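A mode transition matrix of this kind is easy to check mechanically. The sketch below uses hypothetical sub modes standing in for the toaster's In Use mode; the allowed transitions are illustrative, not those of Figure 6-30:

```python
# Sketch of an allowed-mode-transition matrix for hypothetical sub
# modes; True marks a permitted transition, anything absent is denied.
allowed = {
    ("idle", "toasting"): True,
    ("toasting", "ejecting"): True,
    ("ejecting", "idle"): True,
}

def transition_ok(src, dst):
    """True only if the matrix explicitly permits the transition."""
    return allowed.get((src, dst), False)

print(transition_ok("idle", "toasting"))  # permitted by the matrix
print(transition_ok("idle", "ejecting"))  # not listed, so denied
```

Defining the same transitions in both a diagram and a matrix, as the text recommends, lets one representation cross check the other.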


Figure 6-29 An example of a Functions to System Modes matrix that facilitates verifying that all functions are defined for all modes.

Figure 6-30 Allowable mode transitions can be defined in a matrix as well as in a diagram.

Revisit the documentation in the operational view to verify that the functional architecture accounts for every function necessary to fulfill the operational requirements and that no unnecessary functions have been added. Verify that every top level performance and constraining requirement is flowed down, allocated and traceable to lower level requirements and that there are no lower level requirements that are not traceable to a top level requirement.
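The two-way traceability check described above can be sketched as a simple set comparison. The requirement identifiers below are hypothetical:

```python
# Sketch of a requirements traceability check: every lower level
# requirement must trace to a top level requirement, and every top
# level requirement must flow down to at least one lower level one.
top_level = {"SYS-001", "SYS-002"}
trace = {  # lower level requirement -> parent top level requirement
    "SUB-101": "SYS-001",
    "SUB-102": "SYS-001",
    "SUB-201": "SYS-002",
}

# Lower level requirements whose parent is not a known top level item
orphans = {r for r, parent in trace.items() if parent not in top_level}
# Top level requirements with no lower level requirement tracing to them
unallocated = top_level - set(trace.values())

print(orphans)      # empty set: no untraceable lower level requirements
print(unallocated)  # empty set: every top level requirement flowed down
```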

Wednesday, March 16, 2011

Allocating Performance Requirements

6.5.2 Allocate Performance and Other Limiting Requirements
It is important not to get caught up in the process of developing the various documents and diagrams and lose sight of the objective, which is to develop a new system; a primary responsibility of the systems engineers is to define complete and accurate requirements for the physical elements of the new system. Having decomposed the top level system modes into their constituent modes, and the top level functions of the system into the lower level functions required for each of the decomposed modes, the next step is to allocate (decompose) the performance and other constraining requirements that were allocated to the top level functions down to the lower level functions.
The primary path is to follow the FFBDs so that requirements are allocated for every function and are traceable back to the top level functional requirements. Traceability is supported by using the same numbering system used for the functions. Requirements Allocation Sheets may be used, as described in the DoD SEF, or the allocation can be done directly in whatever tool is used for the Requirements Database. Other useful tools are the scenarios, Timeline Analysis Sheets (TLS) and IDEF0 diagrams developed during requirements analysis and functional decomposition. If the team followed recommended practice and began developing or updating applicable models and simulations, these tools can be used to improve the quality of allocated requirements. For example, basing the time budgeted for each function in a TLS on the results of simulations or models is certainly more accurate than having to estimate or arbitrarily allocate times for each function so that the time requirement for a top level function is met.
Another example is any kind of sensor with a top level performance requirement expressed as a probability of detecting an event or sensing the presence or absence of something. This type of performance requirement implies that the sensor exhibit a signal to noise ratio in the test and operational environments specified. Achieving the required signal to noise ratio requires that every function in the FFBD from the function that describes acquiring the signal to the final function that reports or displays the processed signal meets or exceeds a level of performance. Analysis either by models or simulations is necessary to balance the required performance levels so that the top level performance is achieved with the required or desired margin without any lower level functions having to achieve performances that are at or beyond state of the art while other functions are allocated easily achievable performances.
Functional trees are very useful for budgets and allocations, particularly con-ops timelines and software budgets since physical elements don’t have time, lines of code (LOC) or memory requirements but functions do. Transforming the FFBD into an indented list of numbered and named functions on a spreadsheet facilitates constructing a number of useful tables and diagrams. Consider a timeline analysis sheet (TLS) for a hypothetical system having two functions decomposed as shown in Figure 6-24.

Figure 6-24 A hypothetical TLS for a system with two functions decomposed into their sub functions.
The TLS illustrates both the time it takes to execute each sub function in a particular con-ops scenario and the starting and stopping times for each time segment. If the functions were to be executed sequentially nose to tail then just the numerical time column would be needed and the total time would be determined by the sum of the individual times.
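The sequential nose-to-tail case reduces to a simple sum over the time column. The function numbers and durations below are hypothetical:

```python
# Sketch of the sequential-timeline case: when sub functions execute
# nose to tail, the total time is the sum of the individual times.
# Function numbers and durations (seconds) are hypothetical.
tls = {
    "1.1 acquire signal": 0.5,
    "1.2 filter signal": 1.25,
    "1.3 report result": 0.25,
}

total_time = sum(tls.values())
print(total_time)  # 2.0 seconds total for the sequential execution
```

With overlapping or concurrent functions, the start and stop columns of the TLS are needed as well, since the total is no longer a simple sum.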
The same function list can be used for software budgets or allocations. An example is shown in Figure 6-25.


Figure 6-25 Software lines of code and memory can be budgeted or allocated to a list form of the system functions.

Tuesday, March 1, 2011

6.4 Functional Analysis and Allocation


Figure 6-5, the list of 15 tasks in the post of December 29 titled Requirements Analysis, shows that Functional Analysis and Allocation is necessary to accomplish subtask 10, Define Functional Requirements. Functional analysis decomposes each of the high level functions of a system identified in requirements analysis into sets of lower level functions. The performance requirements and any constraints associated with the high level functions are then allocated to these lower level functions. Thus the top level requirements are flowed down to lower level requirements via functions. This decomposition and allocation process is repeated for each level of the system. The objective is to define the functional, performance and interface design requirements. The result is called the functional architecture, and the collection of documents and diagrams developed is the functional view.
A function is an action necessary to perform a task to fulfill a requirement. Functions have inputs and outputs, and the action transforms the inputs into the outputs. Functions do not occupy volume, have mass, nor are they physically visible. Functions are defined by action verbs followed by nouns; for example, condition power or convert frequency. A complex system has a hierarchy of functions such that higher level functions can be decomposed into sets of lower level functions, just as a physical system can be decomposed into lower level physical elements. A higher level function is accomplished by performing a sequence of sub-functions. Criteria for defining functions include simple interfaces; a single function each, i.e. one verb and one noun; independent operation; and transparency, i.e. each function does not need to know the internal conditions of the others. There are both explicit and implicit (derived) functions. Explicit functions are those that are specified, or decomposed from specified functions or performance requirements. Implicit or derived functions are those necessary to meet other specified capabilities or constraints.
Sometimes inexperienced engineers ask why they have to decompose and allocate functions for design elements at the subsystem or lower levels; they believe that once they know the top level function, the performance requirements and any constraints defined in the requirements analysis task, they can design the hardware and software without formal functional analysis/allocation. One simple answer is that items like time budgets and software lines of code are not definable from hardware or software; they are defined from the decomposed functions and sequences of functions. This is because hardware and software do not have time dimensions; only the functions the hardware or software performs have time attributes. Similarly, questions of how many lines of code or bits of memory only have meaning in terms of the functions the software code is executing. Other reasons become more apparent as we describe functional design and design synthesis.
A diagram, simplified from Figure 6-4 and shown in Figure 6-19, helps in understanding both the question asked by inexperienced engineers and its answer.

Figure 6-19 Developing a hardware/software system design is iterative and has multiple paths.

Figure 6-19 illustrates that although the primary design path is ultimately from requirements to hardware/software design; primary because when the design is complete the resulting hardware/software design fulfills the requirements, other paths are necessary and iteration on these paths is necessary to achieve a balanced design. The paths between requirements and functional design and between functional design and hardware/software design are necessary to:
  • Validate functional behavior
  • Plan modeling and simulation
  • Optimize physical partitioning
  • Facilitate specification writing
  • Facilitate failure analysis
  • Facilitate cost analysis and design to cost efforts
  • Facilitate concept selection
  • Define software budgets and CON-OPS timelines.
Although the path from requirements analysis to design synthesis isn’t formally shown in Figure 6-4 it is used when a team elects to design a product around a particular part, such as a state of the art digital processor or new and novel signal processing integrated circuit chip. However, having preselected a part doesn’t negate the need to define the functions performed by the part and verify that the performance requirements and interfaces are satisfied.
6.4.1 Decompose to Lower-level Functions – Decomposition is accomplished by first arranging the top level functions in a logical sequence and then decomposing each top level function into the logical sequence of lower level functions necessary to accomplish it. Sometimes there is more than one “logical” sequence. It is important to examine the decomposed functions and partition them into groups that are related logically. This makes the task of allocating functions to physical elements easier and leads to a better design, as will be explained in a later section. When more than one grouping is logical, trade studies are needed. Although the intent is not to allocate functions to physical entities at this point, functions should not be grouped together if they obviously belong to very different physical elements. The initial grouping should be revisited during the later task of allocating functions to physical elements; this process is described in more detail in a following section.
The DoD SEF has an excellent description of functional analysis/allocation and defines tools used to include Functional Flow Block Diagrams (FFBD), Time Line Analysis, and the Requirements Allocation Sheet. N-Squared diagrams may be used to define interfaces and their relationships. Spreadsheets are also useful tools and are used later in various matrices developed for validation and verification. A spreadsheet is the preferred tool for developing a functions dictionary containing the function number, the function name (verb plus noun) and the detailed definition of each function and its sub functions. The list of function numbers and names in the first two columns of the functions dictionary can be copied into new sheets for the matrices to be developed in the design synthesis task.
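A functions dictionary of the kind described can be sketched as a simple list of rows. The function numbers, names and definitions below are hypothetical:

```python
# Sketch of a functions dictionary as the text describes it: function
# number, name (verb plus noun), and a detailed definition per row.
# All entries are hypothetical.
functions_dictionary = [
    ("1.0", "condition power", "Convert raw input power to regulated rails."),
    ("1.1", "filter input", "Remove conducted noise from the input line."),
    ("2.0", "convert frequency", "Shift the signal to the processing band."),
]

# The first two columns can be copied into the matrices built later
# during design synthesis:
matrix_rows = [(num, name) for num, name, _ in functions_dictionary]
print(matrix_rows[0])  # ('1.0', 'condition power')
```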
Spreadsheets do not lend themselves to identifying the internal and external interfaces as well as FFBDs do, so the FFBD is the preferred tool for decomposition. Time Line Analysis and the Requirements Allocation Sheet are well described in Supplement 5 of the DoD SEF and need no further discussion here. Similarly, the Integration Definition for Function Modeling (IDEF0) is a process-oriented model for showing data flow, system control, and the functional flow of life cycle processes. It is also well described in Supplement 5 of the DoD SEF. The collection of documents, diagrams and models developed using all of the tools is the functional view. The functional architecture is the FFBDs and timeline analyses that describe the system in terms of functions and performance parameters.
Although the FFBD is discussed in the DoD SEF there are some additional important aspects of this tool that are covered in the next posting.

Tuesday, February 15, 2011

Processes for Technical Performance Measures (TPM)

6.3.3 Define Performance and Design Constraint Requirements
Tasks 11, 13 and 14 shown in Figure 6-5, posted Wed. Dec. 29, 2010, are related to performance and design characteristics and are discussed together here. During the analysis of market data, customer documentation and user needs, and in constructing P-Diagrams and Quality Function Deployment (QFD) QT-1 tables the top level system performance parameters and key design characteristics such as size, mass, power and data rates are identified.  These parameters must be quantified and metrics must be defined that enable the team, management and customers to track progress toward meeting these important requirements. It is also necessary to judge which design characteristics are rigid constraints and which can be modified in design trade studies if necessary.
6.3.3.1 Define Tracking Metrics (KPPs & TPMs) - Developing a consistent process for constructing and updating KPPs, TPMs and any other tracking metrics needed is essential to giving the team, their managers and customers confidence that work is progressing satisfactorily, or warning when critical requirements are at risk. It is good practice, as defined by IEEE task 11, to begin the selection of tracking metrics and setting up tracking procedures during the requirements analysis task, even though the design work is not yet dealing with physical design. One widely used process is presented here. Only TPMs are discussed, but the process can be used for other metrics as well. The benefits of TPMs include:
  • Keeping focus on the most important requirements (often called Cardinal Requirements).
  • Providing joint project team, management and customers reliable visibility on progress.
  • Showing current and historical performance vs. specified performance
  • Providing early detection and prediction of problems via trends
  • Being a key part of a design margin management process
  • Monitoring identified Risk areas
  • Documenting key technical assumptions and direction

The steps in generating TPMs include:
  1. Identify critical parameters to trend (e.g., from a QT-1)
  2. Identify specification & target values and the design margin for each parameter
  3. Determine the frequency for tracking and reporting the TPMs (e.g. monthly)
  4. Determine the current parameter performance values by modeling, analysis, estimation or measurement (or combination of these)
  5. Plot the TPMs as a function of time throughout the development process
  6. Emphasize trends over individual estimates
  7. Analyze trends and take appropriate action  

Some systems have a very large number of TPMs and KPPs. Rather than report every metric in such cases, develop a summary metric that reports the number or percent of metrics within control limits and having satisfactory trends. Metrics not within control limits, or having unsatisfactory trends, are presented and analyzed for appropriate actions. This approach works well if those reviewing the metrics trust the project team’s process.
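Such a summary metric can be sketched simply. The metric names and status flags below are hypothetical:

```python
# Sketch of the summary metric described above: the percentage of TPMs
# currently within control limits and trending satisfactorily, plus the
# exception list that gets presented and analyzed.
tpm_status = {  # True = within limits and trending satisfactorily
    "mass": True,
    "power": True,
    "data rate": False,
    "cost": True,
}

within = sum(tpm_status.values())
percent_ok = 100.0 * within / len(tpm_status)
exceptions = [name for name, ok in tpm_status.items() if not ok]

print(f"{percent_ok:.0f}% of TPMs within limits")
print(exceptions)  # only these metrics are presented for action
```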
It is recommended to track and report both margin and contingency with the current best estimate (CBE). Unfortunately there are no standard definitions for these parameters. Example definitions are shown in Figure 6-13.

Figure 6-13 Example definitions for margin and contingency where contingency is defined as the current best estimate plus/minus one sigma.

An example of a TPM tracked monthly for a characteristic for which smaller-is-better, e.g. mass or power, is shown in Figure 6-14 with the plotted points being the CBE plus contingency.

Figure 6-14 An example TPM chart for a characteristic for which smaller-is-better.

Included in the chart are the specification value, shown as a horizontal line; a target value selected by the project team and plotted as a point at the planned end of the development cycle; a margin depletion line; upper and lower control limits; and the plotted values of CBE plus contingency. Selecting the target value, the slope of the margin depletion line and the control limits is based on the judgment of the project team.
Smaller-is-better characteristics like mass and power consumption tend to increase during development, so initial estimates should be well under the specification value. Thus a reasonable strategy for selecting the margin depletion line is to link the initial estimate to the final target value. Control limits are selected to provide guidance on when design action is needed. If the characteristic exceeds the upper control limit, design modifications may be needed to bring it back within the control limits. If the characteristic falls below the lower control limit, the team should look for ways the design can be improved by allowing the characteristic to increase toward the margin depletion line value at that time. For example, in Figure 6-14 the first three estimates trend to a point above the upper control limit. The fourth point suggests that the project team took a design action to bring the characteristic back close to the margin depletion line.
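The margin depletion line and control-limit checks above can be sketched numerically. The initial estimate, target, duration and limit band below are hypothetical; in practice all of these come from the project team's judgment:

```python
# Sketch of a margin depletion line and control-limit check (hypothetical values).
# The depletion line links the initial estimate to the end-of-development target;
# control limits are a fixed band around that line.

INITIAL, TARGET = 80.0, 95.0   # initial estimate and final target value (kg)
MONTHS = 24                    # planned development duration (months)
LIMIT_BAND = 3.0               # control-limit offset around the depletion line

def depletion_line(month):
    """Expected value of the characteristic at a given month (linear growth)."""
    return INITIAL + (TARGET - INITIAL) * month / MONTHS

def check(month, value):
    """Classify an estimate against the control limits at that month."""
    expected = depletion_line(month)
    if value > expected + LIMIT_BAND:
        return "above upper limit"   # design action may be needed
    if value < expected - LIMIT_BAND:
        return "below lower limit"   # room to trade toward the depletion line
    return "within limits"

print(check(12, 92.0))  # halfway point: depletion line at 87.5, so 92.0 is high
```

A value above the upper limit prompts design modifications; a value below the lower limit prompts a search for design improvements that spend the excess margin.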
Many requirements must be broken down into separate components attributable to each contributing factor; e.g., the mass of a system can be broken down into the mass of each subsystem. TPMs for such requirements are a rollup of the contributing components. A variation on a TPM that is particularly useful for such requirements is to explicitly include the basis of estimate (BOE) method used to arrive at the CBE value each period. Typical BOE methods are allocation, estimation, modeling and measurement. Allocation is the process of taking a top-level rollup and budgeting a specified amount to each contributing component. Estimation, modeling and measurement are applied to each contributing component; the results are then combined to determine the value for the top-level requirement. Figure 6-15 is an example of a smaller-is-better characteristic with the CBE shown as a stacked bar, with the proportion of the CBE from each method explicitly shown.

Figure 6-15 An example of a TPM chart where the plotted values include the basis of estimate methods to provide more information about the confidence in reported values.

The proportions allocated and estimated drop as modeling and measurements are made, which increases confidence in the reported CBE value. In the example shown, the CBE is the sum of the values for each contributing component, each with a different uncertainty, and the plotted value (the top of the stacked bar) is the CBE plus the root mean square (RMS) of the uncertainties, i.e., the contingency in this case. For a larger-is-better characteristic the plotted value is the CBE minus the RMS of the uncertainties, assuming the definitions in Figure 6-13.
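The rollup described above amounts to summing the component CBEs and RMS-combining their uncertainties. A minimal sketch, with hypothetical subsystem names, values and BOE methods:

```python
import math

# Sketch of a rollup TPM: per-subsystem CBEs with basis-of-estimate (BOE) method
# and one-sigma uncertainty. Subsystem names and values are hypothetical.
components = [
    {"name": "structure", "cbe": 40.0, "sigma": 4.0, "boe": "allocation"},
    {"name": "avionics",  "cbe": 25.0, "sigma": 2.0, "boe": "modeling"},
    {"name": "payload",   "cbe": 20.0, "sigma": 1.0, "boe": "measurement"},
]

total_cbe = sum(c["cbe"] for c in components)
# Contingency for the rollup is the RMS of the component uncertainties
contingency = math.sqrt(sum(c["sigma"] ** 2 for c in components))
plotted = total_cbe + contingency  # smaller-is-better: plot CBE plus contingency

print(f"CBE = {total_cbe:.1f}, contingency = {contingency:.2f}, "
      f"plotted = {plotted:.2f}")
```

Tracking the BOE field per component shows how much of the rollup is still allocated or estimated rather than modeled or measured, which is exactly the confidence information a stacked-bar chart like Figure 6-15 conveys.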
One alternative method of tracking and reporting metrics is to use spreadsheet matrices rather than charts. This approach allows the value of each contributing component to be tracked and reported. An example for a smaller-is-better characteristic attributable to three component subsystems is shown in Figure 6-16.

Figure 6-16 An example TPM for a smaller-is-better characteristic attributable to subsystems.

The total allocated value can be either the specification value or the specification value minus a top-level margin, i.e., a target value. The BOE method for each contributing component's CBE can be included in additional columns if desired. The disadvantage of a matrix format compared to a time chart is that it isn't as easy to show trends with time, the margin depletion line value and the control limits. Matrices are useful for requirements that involve dozens of contributing components, each important enough to be tracked. The detail can be presented in a matrix with color coding to highlight problem areas, and either a summary line or a summary chart can be used for presentations to management or customers.
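A matrix of this kind is just allocations and CBEs per subsystem with a derived margin column and a summary row. A sketch with hypothetical values:

```python
# Sketch of a matrix-format TPM for a smaller-is-better characteristic (kg).
# Subsystem names, allocations and CBEs are hypothetical.
rows = [
    # (subsystem, allocated, CBE)
    ("structure", 45.0, 40.0),
    ("avionics",  28.0, 25.0),
    ("payload",   22.0, 20.0),
]

print(f"{'subsystem':<10} {'alloc':>6} {'CBE':>6} {'margin':>7}")
for name, alloc, cbe in rows:
    print(f"{name:<10} {alloc:>6.1f} {cbe:>6.1f} {alloc - cbe:>7.1f}")

# Summary line: the rollup that would be presented to management or customers
total_alloc = sum(r[1] for r in rows)
total_cbe = sum(r[2] for r in rows)
print(f"{'total':<10} {total_alloc:>6.1f} {total_cbe:>6.1f} "
      f"{total_alloc - total_cbe:>7.1f}")
```

A negative entry in the margin column is the matrix equivalent of exceeding the upper control limit on a time chart, and is the natural place for color coding.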
A third alternative for tracking metrics is a tree chart. An example using the data from Figure 6-16 as mass in kg is shown in Figure 6-17.

Figure 6-17 An example of a TPM in a tree chart format.

A tree chart has disadvantages similar to a matrix compared to a time chart, but both trees and matrices have the advantage of showing the values of the contributing components. Sometimes it is valuable to use both a time chart and a tree or matrix for very important TPMs or KPPs.
Common sense guidelines to follow in managing tracking metrics include:
  • Keep metrics accurate and up to date
  • Be flexible with metric categories (add, delete, modify as development progresses)
  • Dovetail into customer process
  • Plan ahead to achieve maturing bases of estimate (BOEs)
  • Make performance prediction part of normal design process (flow-up, not run-down)
  • Include in program status reports, e.g. monthly.
It should be part of each IPT's normal work to report or post to a common database the estimates for metrics under their design responsibility. This makes it easy for one of the system engineers to compile up-to-date and accurate metrics for each reporting period. If this is not standard practice then someone has to run down each person responsible for part of a metric each period to obtain their input, a time-consuming and distasteful practice.

Monday, February 7, 2011

6.3.2.3 Kano Diagrams

A Kano diagram is an example of a tool that takes very little of a team’s time but sometimes has a huge payoff. A Kano diagram categorizes system characteristics in the three categories of Must Be, More is Better and Delighters. Each category has a particular behavior on a plot of customer response vs. the degree to which the characteristic is fulfilled. An example Kano diagram is shown in Figure 6-12. The customer response ranges from dissatisfied to neutral to delight. Characteristics that fit the More is Better category fall on a line from dissatisfied to delight depending on the degree to which the characteristic is fulfilled. Characteristics that are classified as Must Be are those the customer expects even if the customer didn’t explicitly request them. Thus these characteristics can never achieve a customer response above neutral. Characteristics that fit the Delighter category are usually characteristics the customer didn’t specify or perhaps didn’t even know were available. Such characteristics produce a response greater than neutral even if only slightly present. Obviously characteristics can be displayed in a three-column list as well as a diagram.
The reasons to construct a Kano diagram are, first, to ensure that no Must Be characteristics are forgotten and, second, to see whether any Delighters can be identified that can be provided with little or no impact on cost or performance. Kano diagrams are not intended to be discussed with customers but to assist the development team in defining the best value concept. The “Trend with Time” arrow on this diagram is not part of a Kano diagram but is there to show that over time characteristics move from Delighters, to More is Better, to Must Be’s. The usual example used to illustrate this trend with time is cup holders in automobiles. They were Delighters when they first appeared, then they became More is Better and now they are Must Be’s. The “characteristics” included in a Kano diagram may be functions, physical design characteristics, human factors, levels of performance, interfaces or modes of operation. Thus this simple tool may contribute to many of the 15 IEEE tasks defined in Figure 6-5 in a previous post.
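The three-column list form of a Kano classification is easy to capture directly, and the first purpose above (no forgotten Must Be's) can be made an explicit check. The example characteristics below are hypothetical:

```python
# Sketch of a Kano classification captured as a three-column list.
# The example characteristics (for a generic consumer product) are hypothetical.
kano = {
    "Must Be":        ["powers on reliably", "meets safety standards"],
    "More is Better": ["battery life", "processing speed"],
    "Delighter":      ["voice control", "self-diagnostics"],
}

for category, items in kano.items():
    print(f"{category}: {', '.join(items)}")

# Review check: an empty category (especially Must Be) signals that the
# team may have forgotten expected-but-unstated characteristics.
missing = [c for c, items in kano.items() if not items]
assert not missing, f"categories with no entries: {missing}"
```

Revisiting the list periodically also captures the trend-with-time effect: entries migrate from Delighter, to More is Better, to Must Be.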


Figure 6-12 A simple example of a Kano diagram that classifies system characteristics into categories of Must Be, More is Better, and Delighters.


Wednesday, January 19, 2011

Parameter Diagrams Help Define Functional Requirements

6.3.2 Define Functional Requirements
The context diagram and the concept of operations provide an understanding of the static and dynamic interactions of the system with the external world. Identifying functional requirements requires that we begin to examine what the system does. This first involves looking at the system as a complete entity and then decomposing top level functions to lower level functions. It includes examining the modes that a system exists in throughout its life cycle. These analyses set the stage for allocating functions to physical elements, the work of the design loop. A useful tool for examining function for the system as a single entity is the Parameter Diagram.
6.3.2.1 Parameter Diagrams - The Parameter Diagram (P-diagram) supports requirements analysis in three ways: it determines and communicates the function of the system or any design element; it defines the parameters that can cause undesired performance (noise factors); and it defines the control factors the element has (or should have), whose values are selected to achieve the desired performance of the ideal function and to minimize sensitivity to the noise factors. P-diagrams are useful at any level of product design, from mission level to part level. The form of a P-diagram is shown in Figure 6-9.

Figure 6-9 A P-diagram defines the ideal function of a design element and lists the noise factors and control factors that must be controlled in the design to achieve as close as practical the ideal function.

Defining the ideal input and the control factors is usually straightforward. The real work comes in defining the ideal response of a design element, the undesired responses and the noise factors that cause the undesired response(s). The design element, labeled “system” in the figure, is treated as a single entity, and the control factors are selected as appropriate for the level of the design element. Thus, if the design element is at the mission level, the control factors relate to the various systems that interact to fulfill the mission. It isn’t helpful to list screw thread size as a control factor for a mission or complex-system design element, but screw thread size may be a control factor for a design element at the component level.
Consider the P-diagram for a pencil. The ideal response to the input of hand motion is a line on a writing surface of uniform density and width. However, a pencil has another ideal response that is important to users: it should sharpen to a fine, uniform point in response to being sharpened. Undesired responses include skipping, tearing paper and the point breaking during sharpening. Control factors include parameters like graphite composition, hardness and diameter, and the type of wood used for the casing. Noise factors include writing surface roughness, writing surface material, type of pencil sharpener, sharpness/dullness of pencil sharpener blade(s), temperature, humidity and aging. These lists aren’t meant to be complete but to illustrate the principles involved. Note that variations in composition or trace impurities are not listed as noise factors since the purity and composition are controllable. Sometimes it is difficult to decide whether a parameter is a noise or control factor. It’s best not to waste time trying to be too pure with definitions; capture the parameter as one or the other and ensure it is taken into account during the design.
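The pencil P-diagram above is simple enough to capture as a plain data structure, which also makes it easy to edit a pattern P-diagram from a previous design rather than start from scratch. The structure below is just one possible encoding:

```python
# Sketch of the pencil P-diagram from the text as a simple data structure.
# The field names are one hypothetical encoding, not a standard schema.
p_diagram = {
    "element": "pencil",
    "ideal_input": "hand motion (and sharpening)",
    "ideal_responses": [
        "line of uniform density and width",
        "sharpens to a fine, uniform point",
    ],
    "undesired_responses": [
        "skipping", "tearing paper", "point breaking during sharpening",
    ],
    "control_factors": [
        "graphite composition", "graphite hardness",
        "graphite diameter", "casing wood type",
    ],
    "noise_factors": [
        "writing surface roughness", "writing surface material",
        "sharpener type", "sharpener blade dullness",
        "temperature", "humidity", "aging",
    ],
}

# Quick completeness summary, e.g. for a design review
for key in ("ideal_responses", "undesired_responses",
            "control_factors", "noise_factors"):
    print(f"{key}: {len(p_diagram[key])} item(s)")
```

For a new design element in the same family, most fields carry over and only the differences need editing, which is the pattern-reuse benefit discussed below.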
Generating the P-diagram is a simple and convenient method for the engineer, whether a system engineer or a design engineer, to think about the effects of the environments the design element faces in its life of use, the types and influence of other elements it interacts with in test, use and storage; the ways in which the element is used (intended and otherwise) and the expectations of users. Thus the P-diagram complements the context diagram and the concept of operations and thereby supports requirements analysis as well as functional analysis.
The reason for listing the undesired responses of a design element is that the preferred method of controlling undesired responses is to optimize the ideal response in the presence of the noise factors. This is the essence of robust design practice discussed in the books of Clausing and Phadke previously cited. Design practices prior to robust design often attempted to reduce the undesirable responses directly. Directly minimizing one undesired response can result in redirecting the energy causing that undesirable response so that it causes another undesired response. In contrast robust design practices maximize the ideal response in the presence of the noise factors so that all the undesired responses are controlled.
One of the benefits of developing a P-diagram is that it defines the parameters to be considered in robust design. Another is its communication value. The P-diagram communicates essential parts of the design problem to both the design team and to any reviewers of the design work. If the design element is of the same family as something that the team has designed before then it takes little time to edit the pattern P-diagram from a previous design by adding any new factors and deleting any factors that don’t apply to the new design element. It is likely that the ideal inputs, ideal responses and undesired responses are very similar and take only minutes to edit. However, it is the differences between the new design element and those the team has designed before that are important to define at the outset. The P-diagram is a quick way to define and capture those differences.
If the design element is entirely new to the team then the P-diagram is critical to identifying all of the noise factors that must be accounted for in the design. Often the effects of noise factors on the performance of a design element are analyzed individually or in groups. Color coding the noise factors on a P-diagram is a quick way to remind reviewers of which noise factors are under consideration in a particular analysis, which ones have been previously evaluated and which ones remain to be analyzed. Using robust design practices such as Taguchi design of experiments noise factors can be grouped or compounded to speed the analysis. The P-diagram is again a good communication tool to describe the structure of a Taguchi analysis of variance experiment. (Taguchi design of experiments is described in detail in the book by Phadke cited earlier.)
Examining the interactions on the context diagram leads to the identification of the functions performed by the system, in addition to the ideal function and undesired interactions defined in the P-diagram. Each interaction on the context diagram is reviewed to define the inputs and outputs of the system. The transformations performed on the inputs to produce the outputs establish the functions performed by the system. Functions should be defined following the classic engineering view that outputs are defined as a transform on one or more inputs. When defining system functions, the requirements must define every output that is produced from every combination of the legal set of inputs. The definition of the function transformations must be comprehensive and accurate, and the function definitions must be exhaustive. Requirements often fall short of being exhaustive by not covering all possible outputs that can be produced.
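The exhaustiveness requirement above reduces to a set comparison: every legal input combination must map to a defined output. A minimal sketch, with a hypothetical mode-command function whose definition is deliberately incomplete:

```python
# Sketch of a function defined as a transform on its legal inputs, with an
# exhaustiveness check. The mode commands and mapping are hypothetical.
legal_inputs = {"on", "off", "standby"}   # the legal set of inputs

# Transform definition: output produced for each input (incomplete on purpose)
transform = {"on": "full power", "off": "no power"}

# Exhaustiveness check: flag legal inputs with no defined output
undefined = sorted(legal_inputs - transform.keys())
print("inputs with no defined output:", undefined)
```

Here the check flags `standby`, the kind of uncovered case that an incomplete requirement set leaves unspecified.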
The concept of operations (Con-Ops) does not provide an exhaustive understanding of all operations of the system, so the identification of all functions must rely on the context and P-diagrams. Additionally, interactions on the context diagram may be involved in multiple functions. Involvement of the user in the function definition provides confidence that all system functions are identified. Con-Ops and P-diagrams typically focus on the functions that produce the target outputs the user desires from the system. However, there are often other functions that must be performed to support the operation of the system but that don’t directly produce the target outputs desired by the user. For example, data systems typically employ error correction algorithms to ensure the desired data performance is achieved. Users typically do not request an error correction function as part of the system. The system engineer must provide the broad vision, based on an understanding of the user's needs, to recognize that this is a necessary function the system must perform and to ensure the algorithm performance requirements are specified. This highlights an important value of the context diagram. Examining all interactions of the system with the external world usually identifies functions the system must perform that are not typically identified as requirements by the user.
When examining the interactions on the context diagram, it is also important to remember that some of the interactions will map to a real physical interface. Data flowing in and out of the system usually represents a physical interface. However, some interactions, such as radiated EMI, are not mapped to a physical interface. It would be a lot easier to design a system if the radiated EMI interactions could be allocated to a specific connector, but in this case the functions need to be specified with the understanding that many mechanical characteristics have requirements that address this function. Functions that do not deal with explicit physical interfaces are often difficult to specify.
The detailed understanding of system interactions is translated into the final functional requirements. For each function, each individual parameter for performance, behavior, and visibility becomes an independent, unique requirement. Having identified the functions, this detailed understanding ultimately leads to functional requirements. The specifics of the performance, behavior, and visibility associated with the input, transform, and output must be included. Performance aspects include tolerances, uncertainties, response times, bandwidth, data structures, etc. Behavior includes considerations of how functions are affected by the operating state of the system, other functional results, and environmental conditions. Visibility defines how much is known about how the function is performed. For example, an error correction function may be required by the system. This error correction function may be defined simply by the ability to perform a specified level of error correction, or the system may need to implement a specific algorithm.
When all functions are defined, the customer must review and agree on the result at a System Requirements Review (SRR) or System Functional Review (SFR). Other tools and methods useful in identifying functional requirements include Quality Function Deployment (QFD), defining system modes and Kano diagrams. These are described in the following sections or chapters.