Tuesday, February 22, 2011
Spider Diagrams - Another useful diagram is the spider diagram, named for its shape. Spider charts are useful for tracking progress on requirements that are interrelated, where careful attention is needed to avoid improving one at the expense of others. An “ility” diagram is a good example of a spider chart that tracks progress toward goals or requirements for selected ilities, e.g., reliability, maintainability, repairability, etc. An ility diagram treats the system or product as a single entity. It is constructed by selecting and ranking in importance eight ilities as described by Bart Huthwaite in his book “Strategic Design” cited earlier. Initially, measures and goals are established for each ility. Later, as the system design takes form, initial estimates or calculations of each ility are made and plotted on the diagram. As system design trades progress and the design is refined, the progress toward goals is tracked. By including the eight most important ilities and tracking their measures on a chart, the design team has a clear picture of how well the design is progressing and whether any ility is being sacrificed for the benefit of some other ility or performance parameter. An example ility chart as it might appear in the middle of system trades is shown in Figure 6-18. Here the eight ilities are numbered in rank order of importance so that if tradeoffs cannot be avoided the most important ilities are favored over less important ones. The line connecting the eight legs is the current best estimate (CBE) for each ility. Trends can be indicated by keeping the previous three or four CBEs.
Figure 6-18 An example of the form of an “ility” chart that tracks progress toward goals for eight ilities ranked in importance from 1 to 8.
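As a rough sketch of the bookkeeping behind such a chart, the ilities can be held as ranked entries with a goal and a CBE, and the lagging ones flagged for attention in trades. The ility names, goals, and CBE values below are hypothetical, not from the text.

```python
# Sketch of tracking "ility" goals and current best estimates (CBEs)
# for a spider (radar) chart. Names, goals, and values are hypothetical.

ilities = [
    # (rank, name, goal, current best estimate)
    (1, "reliability",     0.999, 0.995),
    (2, "maintainability", 0.90,  0.85),
    (3, "availability",    0.995, 0.996),
]

def fraction_of_goal(cbe, goal):
    """Fraction of the goal achieved; 1.0 or more means the goal is met."""
    return cbe / goal

# Flag ilities lagging their goals; in trades, higher-ranked (lower number)
# ilities are favored over lower-ranked ones.
lagging = [name for rank, name, goal, cbe in ilities
           if fraction_of_goal(cbe, goal) < 1.0]
print(lagging)
```

Keeping the last few CBE snapshots in the same structure would support the trend display mentioned above.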
6.3.4 Balancing Customer Needs
A major goal of requirement analysis is to ensure the set of requirements is complete, accurate, and non-redundant. An additional activity during requirement analysis is to identify opportunities to optimize the customer’s desires and needs. This task is considered part of the Technical Management task shown in Figure 6-4 in a previous post. Working with the customer to adjust requirements often presents an opportunity to save cost or schedule with little impact on the customer. It may also provide a significant improvement in a highly desired performance parameter with little impact on a lesser desired parameter. These changes are best identified during requirement analysis; as the program moves into the architecture and then the design phase, changes become more costly, leaving less opportunity for savings. Understanding that a very large percentage of the program cost and schedule is established very early in the program is an important responsibility of the system engineer. Requirements drive the system architecture trades, and once the architecture is defined, the opportunity for savings is a small fraction of the opportunity early in the program. Pugh diagrams (to be described in Chapter 8) and QFD tables (defined in Chapter 7) are valuable tools for assessing these opportunities.
Tuesday, February 15, 2011
6.3.3 Define Performance and Design Constraint Requirements
Tasks 11, 13 and 14 shown in Figure 6-5, posted Wed. Dec. 29, 2010, are related to performance and design characteristics and are discussed together here. During the analysis of market data, customer documentation and user needs, and in constructing P-Diagrams and Quality Function Deployment (QFD) QT-1 tables the top level system performance parameters and key design characteristics such as size, mass, power and data rates are identified. These parameters must be quantified and metrics must be defined that enable the team, management and customers to track progress toward meeting these important requirements. It is also necessary to judge which design characteristics are rigid constraints and which can be modified in design trade studies if necessary.
Define Tracking Metrics (KPPs & TPMs) - Developing a consistent process for constructing and updating Key Performance Parameters (KPPs), Technical Performance Measures (TPMs) and any other tracking metrics needed is essential to giving the team, their managers and customers confidence that work is progressing satisfactorily, or early warning that critical requirements are at risk. It is good practice, as defined by IEEE task 11, to begin selecting tracking metrics and setting up tracking procedures during the requirements analysis task, even though the design work is not yet dealing with physical design. One widely used process is presented here. Only TPMs are discussed, but the process can be used for other metrics as well. The benefits of TPMs include:
- Keeping focus on the most important requirements (often called Cardinal Requirements).
- Providing joint project team, management and customers reliable visibility on progress.
- Showing current and historical performance vs. specified performance
- Providing early detection and prediction of problems via trends
- Being a key part of a design margin management process
- Monitoring identified Risk areas
- Documenting key technical assumptions and direction
The steps in generating TPMs include:
- Identify critical parameters to trend (e.g., from a QT-1)
- Identify specification & target values and the design margin for each parameter
- Determine the frequency for tracking and reporting the TPMs (e.g. monthly)
- Determine the current parameter performance values by modeling, analysis, estimation or measurement (or combination of these)
- Plot the TPMs as a function of time throughout the development process
- Emphasize trends over individual estimates
- Analyze trends and take appropriate action
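The steps above can be sketched for a single smaller-is-better parameter. The parameter, specification, target, and monthly CBE values below are hypothetical illustrations of the process, not values from the text.

```python
# Sketch of the TPM steps for one smaller-is-better parameter (e.g., mass).
# Specification, target, and monthly CBE values are hypothetical.

spec_kg = 100.0      # specification value (must not be exceeded)
target_kg = 90.0     # team-selected target at the planned end of development

# Monthly current best estimates (CBEs), oldest first, from modeling,
# analysis, estimation or measurement.
cbe_history = [82.0, 85.5, 88.0, 86.5]

margin = spec_kg - cbe_history[-1]          # remaining margin against spec
trend = cbe_history[-1] - cbe_history[-2]   # emphasize trends over points

print(f"margin = {margin:.1f} kg, last-month trend = {trend:+.1f} kg")
```

A real implementation would plot `cbe_history` against the margin depletion line and control limits each reporting period, per the chart in Figure 6-14.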
Some systems have a very large number of TPMs and KPPs. In such cases, rather than report every metric, develop a summary metric that reports the number or percentage of metrics within control limits and showing satisfactory trends. Metrics not within control limits, or with unsatisfactory trends, are presented and analyzed for appropriate action. This approach works well if those reviewing the metrics trust the project team’s process.
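Such a summary metric amounts to a simple rollup over the full metric set. A minimal sketch, with hypothetical metric names, control limits, and values:

```python
# Sketch of a summary metric: percentage of tracked metrics within their
# control limits. Metric names, limits, and values are hypothetical.

metrics = {
    # name: (lower control limit, upper control limit, current value)
    "mass_kg":   (70.0, 95.0, 86.5),
    "power_w":   (40.0, 60.0, 63.0),   # out of limits
    "data_mbps": (8.0,  12.0, 10.5),
}

in_limits = {name for name, (lo, hi, val) in metrics.items() if lo <= val <= hi}
pct = 100.0 * len(in_limits) / len(metrics)

# Only the out-of-limits metrics are presented individually for action.
out_of_limits = sorted(set(metrics) - in_limits)
print(f"{pct:.0f}% within limits; review: {out_of_limits}")
```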
It is recommended to track and report both margin and contingency along with the current best estimate (CBE). Unfortunately, there are no standard definitions for these parameters. Example definitions are shown in Figure 6-13.
Figure 6-13 Example definitions for margin and contingency where contingency is defined as the current best estimate plus/minus one sigma.
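Under the Figure 6-13 definitions, the arithmetic for a smaller-is-better characteristic can be sketched as follows. This assumes the contingency is one sigma of the estimate's uncertainty; the numbers are hypothetical.

```python
# Sketch of margin/contingency bookkeeping per the Figure 6-13 style of
# definitions, for a smaller-is-better characteristic. Values hypothetical.

spec = 100.0     # specification value
cbe = 86.5       # current best estimate
sigma = 3.0      # one-sigma uncertainty on the CBE

contingency = sigma           # assumed definition: one sigma on the estimate
plotted = cbe + contingency   # value plotted on the TPM chart (Figure 6-14)
margin = spec - plotted       # remaining margin against the specification

print(f"plotted = {plotted}, margin = {margin}")
```

For a larger-is-better characteristic the signs flip: the plotted value would be the CBE minus the contingency, and margin would be measured above the specification floor.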
An example of a TPM tracked monthly for a characteristic for which smaller-is-better, e.g. mass or power, is shown in Figure 6-14 with the plotted points being the CBE plus contingency.
Figure 6-14 An example TPM chart for a characteristic for which smaller-is-better.
Included in the chart are the specification value, shown as a horizontal line; a target value selected by the project team and plotted as a point at the planned end of the development cycle; a margin depletion line; upper and lower control limits; and the plotted values of CBE plus contingency. Selecting the target value, the slope of the margin depletion line, and the control limits is based on the judgment of the project team.
Smaller-is-better characteristics like mass and power consumption tend to increase during the development so initial values should be well under the specification value. Thus a reasonable strategy for selecting the margin depletion line is to link the initial estimate with the final target value. Control limits are selected to provide guidance for when design action is needed. If the characteristic exceeds the upper control limit design modifications may be needed to bring the characteristic back within the control limits. If the characteristic is below the lower control limit then the team should look for ways that the design can be improved by allowing the characteristic to increase toward the margin depletion line value at that time. For example, in Figure 6-14 the first three estimates trend to a point above the upper control limit. The fourth point suggests that the project team has taken a design action to bring the characteristic back close to the margin depletion line.
Many requirements must be broken down into separate components, attributable to each contributing factor; e.g. mass of a system can be broken down into the mass for each subsystem. TPMs for such requirements are a rollup of the contributing components. A variation on a TPM particularly useful for such requirements is to explicitly include the basis of estimate (BOE) methods of arriving at the CBE value each period. Typical BOE Methods are allocation, estimation, modeling and measurement. Allocation is the process of taking a top-level roll-up and budgeting a specified amount to each contributing component. Estimation, modeling and measurement are methods applied to each contributing component and then combining them to determine the result for the top level requirement. Figure 6-15 is an example of a smaller-is-better characteristic with the CBE shown as a stacked bar with the proportion of the CBE from each method explicitly shown.
Figure 6-15 An example of a TPM chart where the plotted values include the basis of estimate methods to provide more information about the confidence in reported values.
The proportions allocated and estimated drop as modeling and measurements are made, which increases confidence in the reported CBE value. In the example shown, the CBE is the sum of the values for each contributing component, each with a different uncertainty, and the plotted values, i.e., the top of the stacked bar, are the CBE plus the RMS of the uncertainties, i.e., the contingency in this case. For a larger-is-better characteristic the plotted value is the CBE minus the RMS of the uncertainties, assuming the definitions in Figure 6-13.
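The rollup just described can be sketched directly: sum the component CBEs and combine their one-sigma uncertainties as a root-mean-square. The subsystem names and values below are hypothetical.

```python
# Sketch of rolling up a smaller-is-better TPM from component CBEs, with
# per-component one-sigma uncertainties combined as an RMS. Values are
# hypothetical.
import math

components = {
    # subsystem: (CBE, one-sigma uncertainty)
    "structure": (40.0, 2.0),
    "avionics":  (25.0, 1.5),
    "payload":   (21.5, 1.0),
}

cbe_total = sum(cbe for cbe, _ in components.values())
rms = math.sqrt(sum(sig ** 2 for _, sig in components.values()))
plotted = cbe_total + rms   # smaller-is-better: CBE plus RMS contingency

print(f"CBE = {cbe_total}, contingency = {rms:.2f}, plotted = {plotted:.2f}")
```

Tagging each component with its BOE method (allocation, estimation, modeling or measurement) would support the stacked-bar presentation of Figure 6-15.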
One alternative method of tracking and reporting metrics is using spreadsheet matrices rather than charts. This approach allows the values for each contributing component to be tracked and reported. An example for a smaller-is-better characteristic attributable to three component subsystems is shown in Figure 6-16.
Figure 6-16 An example TPM for a smaller-is-better characteristic attributable to subsystems.
The total allocated value can be either the specification value or the specification value minus a top-level margin, i.e., a target value. The BOE methods for each contributing component CBE can be included in additional columns if desired. The disadvantage of a matrix format compared to a time chart is that it isn’t as easy to show trends with time, the margin depletion line value and control limits. Matrices are useful for requirements that involve dozens of contributing components, each important enough to be tracked. The detail can be presented in a matrix with color coding to highlight problem areas, and either a summary line or summary chart used for presentations to management or customers.
A third alternative for tracking metrics is a tree chart. An example using the data from Figure 6-16 as mass in kg is shown in Figure 6-17.
Figure 6-17 An example of a TPM in a tree chart format.
A tree chart has disadvantages similar to a matrix compared to a time chart, but both trees and matrices have the advantage of showing the values for contributing components. Sometimes it is valuable to use both a time chart and a tree or matrix for very important TPMs or KPPs.
Common sense guidelines to follow in managing tracking metrics include:
- Keep metrics accurate and up to date
- Be flexible with metric categories (add, delete, modify as development progresses)
- Dovetail into customer process
- Plan ahead to achieve maturing Basis Of Estimates (BOE)
- Make performance prediction part of normal design process (flow-up, not run-down)
- Include in program status reports, e.g. monthly.
It should be part of each IPT’s normal work to report or post to a common database the estimates for metrics under their design responsibility. This makes it easy for one of the system engineers to compile up-to-date and accurate metrics for each reporting period. If this is not standard practice, then someone has to run down each person responsible for part of a metric each period to obtain their input: a time-consuming and distasteful practice.
Monday, February 7, 2011
A Kano diagram is an example of a tool that takes very little of a team’s time but sometimes has a huge payoff. A Kano diagram categorizes system characteristics into the three categories of Must Be, More is Better and Delighters. Each category has a particular behavior on a plot of customer response vs. the degree to which the characteristic is fulfilled. An example Kano diagram is shown in Figure 6-12. The customer response ranges from dissatisfied to neutral to delight. Characteristics that fit the More is Better category fall on a line from dissatisfied to delight depending on the degree to which the characteristic is fulfilled. Characteristics classified as Must Be are those the customer expects even if the customer didn’t explicitly request them; thus these characteristics can never achieve a customer response above neutral. Characteristics that fit the Delighter category are usually characteristics the customer didn’t specify or perhaps didn’t even know were available. Such characteristics produce a response greater than neutral even if only slightly present. Obviously, characteristics can be displayed in a three-column list as well as a diagram.
The reasons to construct a Kano diagram are first to ensure that no Must Be characteristics are forgotten and second to see if any Delighters can be identified that can be provided with little or no impact on cost or performance. Kano diagrams are not intended to be discussed with customers but to assist the development team in defining the best value concept. The “Trend with Time” arrow on this diagram is not part of a Kano diagram but is there to show that over time characteristics move from Delighters, to More is Better to Must Be’s. The usual example used to illustrate this trend with time is cup holders in automobiles. They were Delighters when they first appeared, then they became More is Better and now they are Must Be’s. The “characteristics” included in a Kano diagram may be functions, physical design characteristics, human factors, levels of performance, interfaces or modes of operation. Thus this simple tool may contribute to many of the 15 IEEE tasks defined in Figure 6-5 in a previous post.
Figure 6-12 A simple example of a Kano diagram that classifies system characteristics into categories of Must Be, More is Better, and Delighters.
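The three-category classification and the response behaviors described above can be caricatured in a few lines. The characteristics, category assignments, and response curves below are hypothetical illustrations, not part of the Kano method proper.

```python
# Sketch of a Kano classification as a three-category lookup, with a crude
# customer-response curve per category. Characteristics are hypothetical.

kano = {
    "brakes work reliably": "Must Be",
    "fuel economy":         "More is Better",
    "heated cup holders":   "Delighter",
}

def customer_response(category, fulfillment):
    """Rough response on a -1 (dissatisfied) to +1 (delight) scale, given
    fulfillment in [0, 1]. A coarse caricature of the Figure 6-12 curves."""
    if category == "Must Be":
        return min(0.0, fulfillment - 1.0)   # never rises above neutral
    if category == "More is Better":
        return 2.0 * fulfillment - 1.0       # linear: dissatisfied -> delight
    return fulfillment                       # Delighter: positive if present

# A fully met Must Be only reaches neutral (0.0), as the text notes.
print(customer_response(kano["brakes work reliably"], 1.0))
```

The key property the sketch preserves is the asymmetry: a Must Be can only hurt, a Delighter can only help, and More is Better runs the full range.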