
Tuesday, March 29, 2011

Methods for Verifying Functional Architecture

6.5.4 Verify the Functional Architecture
The functional architecture is the FFBDs and the allocated requirements. The collection of all documentation developed during the functional analysis and allocation task is called the functional view. The final task in defining the functional architecture is to review all of the functional view documentation for consistency and accuracy. Check the functions defined for each mode and sub mode to verify that no functions are missing and that the requirements allocated to each function are appropriate for that mode and sub mode. Figure 6-29 shows an example of a modes-to-functions matrix useful for verifying that all top level functions needed for each sub mode of a toaster's In Use mode are defined properly. This example examines only one system mode, but the process for examining all modes and all lower level functions is just an extension of the matrix.
The methodology of verifying functional design by using two different tools to describe the same functional requirements also applies to mode transitions. Figure 6-30 is an example of a matrix used to define the allowed transitions among the sub modes of the In Use mode, previously defined with a mode transition diagram in Figure 6-11. Although this is a trivial example, it illustrates the methodology.


Figure 6-29 An example of a Functions to System Modes matrix that facilitates verifying that all functions are defined for all modes.

Figure 6-30 Allowable mode transitions can be defined in a matrix as well as in a diagram.
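The checks these two matrices support can also be scripted. Below is a minimal sketch in Python, using hypothetical toaster modes, functions and transitions (the actual entries in Figures 6-29 and 6-30 are not reproduced here); it flags any function not used in any sub mode and any transition the matrix does not permit.

```python
# A minimal sketch of the matrix checks in Figures 6-29 and 6-30, using
# hypothetical toaster modes, functions and transitions.

# Modes-to-functions matrix: which top level functions apply in each sub mode.
functions = ["Accept Bread", "Toast Bread", "Eject Bread", "Indicate Status"]
mode_functions = {
    "Idle":     {"Accept Bread", "Indicate Status"},
    "Toasting": {"Toast Bread", "Indicate Status"},
    "Done":     {"Eject Bread", "Indicate Status"},
}

# Check: every defined function is used in at least one sub mode.
used = set().union(*mode_functions.values())
for f in functions:
    if f not in used:
        print(f"Function not used in any mode: {f}")

# Allowed mode transitions (the matrix form of a mode transition diagram).
allowed = {("Idle", "Toasting"), ("Toasting", "Done"), ("Done", "Idle")}

def check_transition(src, dst):
    """Flag a transition that the matrix does not permit."""
    if (src, dst) not in allowed:
        print(f"Illegal transition: {src} -> {dst}")

check_transition("Idle", "Toasting")   # allowed, prints nothing
check_transition("Idle", "Done")       # flagged
```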

Revisit the documentation in the operational view to verify that the functional architecture accounts for every function necessary to fulfill the operational requirements and that no unnecessary functions have been added. Verify that every top level performance and constraining requirement is flowed down, allocated and traceable to lower level requirements, and that there are no lower level requirements that are not traceable to a top level requirement.
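This two-way traceability check is mechanical enough to automate. The sketch below uses hypothetical requirement numbers; each lower level requirement records the top level requirement it was flowed down from, and the script flags orphans in both directions.

```python
# A sketch of the two-way traceability check, with hypothetical
# requirement numbers.

top_level = {"R1", "R2", "R3"}
lower_level = {
    "R1.1": "R1", "R1.2": "R1",
    "R2.1": "R2",
    "R4.1": "R4",   # orphan: traces to a nonexistent parent
}

# Every lower level requirement must trace to a real top level requirement.
for req, parent in lower_level.items():
    if parent not in top_level:
        print(f"{req} traces to unknown requirement {parent}")

# Every top level requirement must be flowed down to at least one child.
covered = set(lower_level.values())
for req in sorted(top_level - covered):
    print(f"{req} has no allocated lower level requirements")
```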

Tuesday, March 22, 2011

Tools for Defining and Verifying Functional Interfaces

6.5.3 Define and Verify Functional Interfaces (Internal/External)
Logical interfaces with external elements are defined in the context diagram and the FFBDs, and internal interfaces are defined in the FFBDs. Both types of interfaces must be analyzed to verify that all interfaces are properly located and defined. Examine each external interface and verify that the information coming from or going to the interface matches the information being handled by the parent function in the chain of lower level child functions. Similarly, examine each function and verify that all information coming from or going to the function is accounted for: no function has an output that doesn't go either to another function or to an external logical interface, and no function requires information that is not coming to it from another function or external interface. This task is made easier if the links in a process-oriented FFBD are labeled. An example of a simple process-oriented FFBD of a toaster with internal and external interfaces is shown in Figure 6-26.
(Apologies to experienced designers of toasters for any mistakes by the authors who have limited domain knowledge of toaster design. We use the example of a toaster because it is simple enough that diagrams and models fit on a page and everyone has some idea of what a toaster does and how it might work. To those “virtuous and pure” engineers whose response is “toasters don’t apply to my work so these examples are useless to me” we remind you that the authors have used these same methods on systems costing hundreds of millions of dollars to develop. Learn the methodologies illustrated by these examples and don’t be put off by errors or incompleteness in these examples or the fact that your systems are much more complex.)

Figure 6-26 An example of a FFBD for a toaster showing the internal and external interfaces for each function of the Operational mode.
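If the links are labeled, the consistency check described above reduces to walking a list of (source, object, destination) triples. A minimal sketch follows, with hypothetical function and object names; "EXT" marks an external logical interface.

```python
# A sketch of the link-consistency check for a labeled process-oriented FFBD.
# Each labeled link is recorded as (source, object, destination), where
# "EXT" marks an external logical interface. Names are hypothetical.

links = [
    ("EXT",          "bread slice",   "Accept Bread"),
    ("Accept Bread", "held bread",    "Toast Bread"),
    ("Toast Bread",  "toasted bread", "Eject Bread"),
    ("Eject Bread",  "toast",         "EXT"),
    ("Toast Bread",  "heat status",   None),  # dangling output: goes nowhere
]

for src, obj, dst in links:
    # No function output may dangle: it must reach a function or "EXT".
    if dst is None:
        print(f"Output '{obj}' of '{src}' goes to no function or external interface")
    # No function input may appear from nowhere.
    if src is None:
        print(f"Input '{obj}' to '{dst}' comes from no function or external interface")
```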

A “from/to” matrix of the functions in a particular mode is an alternative tool for defining interfaces among functions. An example is shown in Figure 6-27 for a toaster in its operational mode.

Figure 6-27 A Matrix of Functions to Functions is an alternate tool for defining internal and external interfaces among functions.

N-Squared diagrams are useful tools for analyzing interfaces for systems whose functions have many internal interfaces. This tool also provides verification of the grouping and sequencing of lower level functions; it is much easier to detect sequencing problems in an N-Squared diagram than in a FFBD. An example of an N-Squared diagram used for defining internal and external interfaces is shown in Figure 6-28. The advantages of the N-Squared diagram aren't apparent in this simple case, but imagine if the functions were sequenced more randomly along the diagonal. Then there would be arrows to the left of the diagonal, indicating poor sequencing.
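The below-the-diagonal test is easy to express in code. In the sketch that follows (hypothetical function names), functions are indexed in their diagonal order and any interface running from a later function back to an earlier one is flagged as feedback that merits a sequencing review.

```python
# A sketch of the sequencing check an N-Squared diagram makes visible.
# Functions are placed along the diagonal in execution order; an interface
# from a later function back to an earlier one falls below the diagonal
# and signals feedback or poor sequencing. Names are hypothetical.

order = ["Accept Bread", "Toast Bread", "Eject Bread"]
index = {name: i for i, name in enumerate(order)}

interfaces = [
    ("Accept Bread", "Toast Bread"),
    ("Toast Bread", "Eject Bread"),
    ("Eject Bread", "Accept Bread"),  # below the diagonal: feedback
]

for src, dst in interfaces:
    if index[src] > index[dst]:
        print(f"Below-diagonal interface (check sequencing): {src} -> {dst}")
```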

It is good practice to develop two different tools for defining internal and external interfaces, for example a FFBD and an N-Squared diagram. The two are then compared to verify that all interfaces are defined, grouped and sequenced correctly and consistently with the definitions of functions in the data dictionary. The small amount of time it takes to verify functional interfaces with two different tools is sound risk mitigation against a mistake that isn't discovered until system or subsystem testing, when correcting the error is very costly.


Figure 6-28 An N-Squared diagram is an excellent tool for defining, grouping and sequencing interfaces.

Wednesday, March 16, 2011

Allocating Performance Requirements

6.5.2 Allocate Performance and Other Limiting Requirements
It is important not to get caught up in the process of developing the various documents and diagrams and lose sight of the objective, which is to develop a new system; a primary responsibility of the systems engineers is to define complete and accurate requirements for the physical elements of the new system. Having decomposed the top level system modes into their constituent modes, and the top level functions of the system into the lower level functions required for each of the decomposed modes, the next step is to allocate (decompose) the performance and other constraining requirements that were allocated to the top level functions down to the lower level functions.
The primary path is to follow the FFBDs so that requirements are allocated for every function and are traceable back to the top level functional requirements. Traceability is supported by using the same numbering system used for the functions. Requirements Allocation Sheets may be used, as described in the DoD SEF, or the allocation can be done directly in whatever tool is used for the requirements database. Other useful tools are the scenarios, Timeline Analysis Sheets (TLS) and IDEF0 diagrams developed during requirements analysis and functional decomposition. If the team followed recommended practice and began developing or updating applicable models and simulations, these tools can be used to improve the quality of allocated requirements. For example, budgeting the time for each function in a TLS based on the results of simulations or models is certainly more accurate than estimating or arbitrarily allocating times so that the time requirement for a top level function is met.
Another example is any kind of sensor with a top level performance requirement expressed as a probability of detecting an event or sensing the presence or absence of something. This type of performance requirement implies that the sensor exhibit a certain signal-to-noise ratio in the specified test and operational environments. Achieving the required signal-to-noise ratio requires that every function in the FFBD, from the function that acquires the signal to the final function that reports or displays the processed signal, meet or exceed some level of performance. Analysis with models or simulations is necessary to balance the required performance levels so that the top level performance is achieved with the required or desired margin, without some lower level functions having to achieve performances at or beyond the state of the art while other functions are allocated easily achievable performances.
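A minimal sketch of such a budget follows. It assumes a simple chain model in which each function is allocated an allowed signal-to-noise degradation in dB; the stage names, allocations and requirement are all made up, and a real budget would come from the models or simulations described above.

```python
# A minimal sketch of balancing a signal-to-noise budget across a chain of
# functions. Each stage is allocated an allowed SNR degradation in dB;
# all numbers are hypothetical.

input_snr_db = 20.0          # SNR of the acquired signal
required_output_snr_db = 12.0
desired_margin_db = 2.0

# Allocated SNR degradation per function, acquisition through display.
allocations_db = {
    "Acquire Signal":  1.5,
    "Filter Signal":   1.0,
    "Digitize Signal": 2.0,
    "Process Signal":  1.5,
    "Display Result":  0.5,
}

output_snr_db = input_snr_db - sum(allocations_db.values())
margin_db = output_snr_db - required_output_snr_db
print(f"Predicted output SNR: {output_snr_db:.1f} dB, margin: {margin_db:.1f} dB")
if margin_db < desired_margin_db:
    print("Rebalance allocations: margin below the desired value")
```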
Functional trees are very useful for budgets and allocations, particularly for con-ops timelines and software budgets, since physical elements don't have time, lines of code (LOC) or memory requirements but functions do. Transforming the FFBD into an indented list of numbered and named functions on a spreadsheet facilitates constructing a number of useful tables and diagrams. Consider a timeline analysis sheet (TLS) for a hypothetical system having two functions decomposed as shown in Figure 6-24.

Figure 6-24 A hypothetical TLS for a system with two functions decomposed into their sub functions.
The TLS illustrates both the time it takes to execute each sub function in a particular con-ops scenario and the starting and stopping times for each time segment. If the functions were executed sequentially, nose to tail, then just the numerical time column would be needed and the total time would be the sum of the individual times.
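The TLS bookkeeping for the sequential, nose-to-tail case is a running sum. A sketch with made-up function numbers and durations:

```python
# A sketch of TLS bookkeeping for sub functions executed sequentially,
# nose to tail. Function numbers, names and durations are hypothetical.

durations = [
    ("1.1 Acquire Image", 0.10),
    ("1.2 Process Image", 0.25),
    ("2.1 Store Image",   0.05),
    ("2.2 Display Image", 0.15),
]

t = 0.0
for name, dt in durations:
    print(f"{name}: start {t:.2f} s, stop {t + dt:.2f} s")
    t += dt
print(f"Total time: {t:.2f} s")
```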
The same function list can be used for software budgets or allocations. An example is shown in Figure 6-25.


Figure 6-25 Software lines of code and memory can be budgeted or allocated to a list form of the system functions.

Tuesday, March 8, 2011

Functional Flow Block Diagrams

6.5.1.1 Functional Flow Block Diagrams - Graphical models are used to define and depict the sequence of functions making up a higher level function necessary to fulfill requirements. Functional Flow Block Diagrams (FFBD) can be process-oriented models that represent functions as nodes and objects as the connecting links. Nodes are typically labeled but links are typically unlabeled in process-oriented FFBDs. Figure 6-21 is a FFBD for two of the functions of a digital camera. Here only one external interface is identified, i.e. the interface with light from the scene.


Figure 6-21 A FFBD developed as a process-oriented model has named and numbered functions as nodes and objects as links.

Part of the task of functional design is to decompose high level functions into the lower level functions necessary to carry out the action implied by the high level function. For example, the function “image scene” can be decomposed as shown in the FFBD of Figure 6-22. In this example the objects linking the four lower level functions are also labeled. Note that nodes are numbered as well as named with the numbers indicating the level of the functions in the overall hierarchy of functions. The numbers are selected to provide traceability; e.g. sub functions decomposed from a function 1.1 with n sub functions are numbered 1.1.1 to 1.1.n.


Figure 6-22 The function Image Scene 1.1 from Figure 6-21 can be decomposed into four lower level functions.
Notice that functions 1.1.2 and 1.1.3 in Figure 6-22 could be interchanged in the sequence and still be a logical sequence. This is a simple example of having more than one logical sequence of lower level functions. The “best” sequence is determined by conducting design trade studies when these functions are allocated to physical elements. Note also that the functions 1.1.1 and 1.1.2 are easily grouped whereas 1.1.1 and 1.1.3 or 1.1.2 and 1.1.3 are not easily grouped. Therefore the sequence shown is likely to be the preferred sequence, at least until design trade studies are complete.
An alternative to the process-oriented model of a FFBD is an object-oriented model with the nodes and links reversed. An example of an object oriented model is shown in Figure 6-23.



Figure 6-23 An example of a FFBD as an object-oriented model of two of the functions of a digital camera with named functions as links and named objects as nodes.

The decision to develop process-oriented or object-oriented models depends upon the experience of the systems engineer and the details of the system being designed. A comparison of the two approaches, resulting from an analysis of both, is shown in Table 6-1.


Table 6-1 Selecting an Object-Oriented or Process-Oriented model depends on the details of the specific system design. From the paper Cognitive Fit Applied to Systems Engineering Models by Larry Doyle and Michael Pennotti, Conference on Systems Engineering Research, 2004. 

As with context or domain diagrams, it is helpful to have pattern diagrams for all levels of FFBDs representing the class of systems that includes the system being designed. This saves considerable time compared to generating the FFBDs for all levels of a new system from scratch. The objective is for the pattern diagrams to contain all possible functions, sub functions and external and internal interfaces of the entire class of systems. The task then becomes examining each top level function and interface to determine whether it belongs to the new system. If a function does not belong, it is deleted along with all sub functions decomposed from it. After deleting all unnecessary functions and interfaces from the top level, and the sub functions and interfaces traceable to them, it is necessary to examine the sub functions at each level to ensure that only necessary ones are kept; the system being designed may include a top level function from the pattern without including the entire set of sub functions decomposed from it. It is also necessary to examine the partitioning and grouping of functions, as the best choice is likely to be specific to the system design and not necessarily that captured in the pattern diagrams.
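Because the function numbering encodes traceability, pruning a pattern is largely a prefix test on the numbers. A sketch with a hypothetical pattern list:

```python
# A sketch of pruning a pattern diagram's function list. Deleting an
# unneeded top level function must also delete every sub function traceable
# to it, which the hierarchical numbering makes a simple prefix test.
# The function numbers and names are hypothetical.

pattern = {
    "1":   "Image Scene",
    "1.1": "Focus Light",
    "1.2": "Convert Light",
    "2":   "Print Image",      # not needed in the new system
    "2.1": "Format Image",
    "2.2": "Transfer Image",
    "3":   "Store Image",
}

def prune(functions, top_number):
    """Remove a top level function and all sub functions numbered under it."""
    return {num: name for num, name in functions.items()
            if num != top_number and not num.startswith(top_number + ".")}

new_system = prune(pattern, "2")
print(new_system)   # "2", "2.1" and "2.2" are gone
```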

Tuesday, March 1, 2011

6.4 Functional Analysis and Allocation


Figure 6-5, the list of 15 tasks in the post of December 29 titled Requirements Analysis, shows that Functional Analysis and Allocation is necessary to accomplish subtask 10, Define Functional Requirements. Functional analysis decomposes each of the high level functions of a system identified in requirements analysis into sets of lower level functions. The performance requirements and any constraints associated with the high level functions are then allocated to these lower level functions. Thus the top level requirements are flowed down to lower level requirements via functions. This decomposition and allocation process is repeated for each level of the system. The objective is to define the functional, performance and interface design requirements. The result is called the functional architecture, and the collection of documents and diagrams developed is the functional view.
A function is an action necessary to perform a task to fulfill a requirement. Functions have inputs and outputs, and the action is to transform the inputs into the outputs. Functions do not occupy volume, have mass, nor are they physically visible. Functions are defined by action verbs followed by nouns; for example, condition power or convert frequency. A complex system has a hierarchy of functions such that higher level functions can be decomposed into sets of lower level functions, just as a physical system can be decomposed into lower level physical elements. A higher level function is accomplished by performing a sequence of sub functions. Criteria for defining functions include having simple interfaces; performing a single function each, i.e. one verb and one noun; operating independently; and transparency, i.e. no function needs to know the internal conditions of the others. There are both explicit and implicit (derived) functions. Explicit functions are those that are specified or decomposed from specified functions or performance requirements. Implicit or derived functions are those necessary to meet other specified capabilities or constraints.
Sometimes inexperienced engineers ask why they have to decompose and allocate functions for design elements at the subsystem or lower levels; they believe that once they know the top level function, the performance requirements and any constraints defined in the requirements analysis task, they can design the hardware and software without formal functional analysis/allocation. One simple answer is that items like time budgets and software lines of code are not definable from hardware or software; they are defined from the decomposed functions and sequences of functions. Hardware and software do not have time dimensions; only the functions the hardware or software performs have time attributes. Similarly, the questions of how many lines of code or bits of memory are needed only have meaning in terms of the functions the software is executing. Other reasons become more apparent as we describe functional design and design synthesis.
A diagram, simplified from Figure 6-4 and shown in Figure 6-19, helps in understanding both the question asked by inexperienced engineers and its answer.

Figure 6-19 Developing a hardware/software system design is iterative and has multiple paths.

Figure 6-19 illustrates that although the primary design path is ultimately from requirements to hardware/software design (primary because, when the design is complete, the resulting hardware/software design fulfills the requirements), other paths are necessary, and iteration on these paths is needed to achieve a balanced design. The paths between requirements and functional design, and between functional design and hardware/software design, are necessary to:
  • Validate functional behavior
  • Plan modeling and simulation
  • Optimize physical partitioning
  • Facilitate specification writing
  • Facilitate failure analysis
  • Facilitate cost analysis and design to cost efforts
  • Facilitate concept selection
  • Define software budgets and CON-OPS timelines.
Although the path from requirements analysis to design synthesis isn't formally shown in Figure 6-4, it is used when a team elects to design a product around a particular part, such as a state-of-the-art digital processor or a new and novel signal processing integrated circuit chip. However, having preselected a part doesn't negate the need to define the functions performed by the part and to verify that the performance requirements and interfaces are satisfied.
6.4.1 Decompose to Lower-level Functions – Decomposition is accomplished by first arranging the top level functions in a logical sequence and then decomposing each top level function into the logical sequence of lower level functions necessary to accomplish it. Sometimes there is more than one “logical” sequence. It is important to examine the decomposed functions and partition them into logically related groups. This makes the task of allocating functions to physical elements easier and leads to a better design, as will be explained in a later section. When more than one grouping is logical, trade studies are needed. Although the intent is not to allocate functions to physical entities at this point, functions should not be grouped together if they obviously belong to very different physical elements. The initial grouping should be revisited during the later task of allocating functions to physical elements; this process is described in more detail in a following section.
The DoD SEF has an excellent description of functional analysis/allocation and describes tools including Functional Flow Block Diagrams (FFBD), Time Line Analysis, and the Requirements Allocation Sheet. N-Squared diagrams may be used to define interfaces and their relationships. Spreadsheets are also useful tools and are used later in various matrices developed for validation and verification. A spreadsheet is the preferred tool for developing a functions dictionary containing the function number, the function name (verb plus noun) and the detailed definition of each function and its sub functions. The list of function numbers and names in the first two columns of the functions dictionary can be copied into new sheets for the matrices to be developed in the design synthesis task.
Spreadsheets do not lend themselves to identifying the internal and external interfaces as well as FFBDs do, so the FFBD is the preferred tool for decomposition. Time Line Analysis and the Requirements Allocation Sheet are well described in Supplement 5 of the DoD SEF and need no further discussion here. Similarly, the Integration Definition for Function Modeling (IDEF0) is a process-oriented model for showing data flow, system control, and the functional flow of life cycle processes; it too is well described in Supplement 5 of the DoD SEF. The collection of documents, diagrams and models developed using all of these tools is the functional view. The functional architecture is the FFBDs and timeline analyses that describe the system in terms of functions and performance parameters.
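As a concrete illustration, the functions dictionary lends itself to a simple tabular form. The sketch below (hypothetical entries) writes a dictionary of function number, name and definition to a CSV file and pulls out the first two columns for reuse in the design synthesis matrices.

```python
# A sketch of a functions dictionary kept in spreadsheet form: function
# number, name (verb plus noun) and detailed definition. Entries are
# hypothetical.
import csv

dictionary = [
    ("1.1",   "Image Scene",   "Transform incoming light into a stored image."),
    ("1.1.1", "Focus Light",   "Form the scene's light into a focused image."),
    ("1.1.2", "Convert Light", "Convert the focused image into electrical signals."),
]

with open("functions_dictionary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Number", "Name", "Definition"])
    writer.writerows(dictionary)

# The first two columns, ready to paste into a new matrix sheet.
matrix_rows = [(num, name) for num, name, _ in dictionary]
print(matrix_rows)
```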
Although the FFBD is discussed in the DoD SEF there are some additional important aspects of this tool that are covered in the next posting.

Tuesday, February 22, 2011

Spider Diagrams

6.3.3.2 Spider Diagrams - Another useful diagram is the spider diagram, named for its shape. Spider charts are useful for tracking progress on requirements that are interrelated, so that careful attention is needed to avoid improving one at the expense of others. An “ility” diagram is a good example of a spider chart that tracks progress toward goals or requirements for selected ilities, i.e. reliability, maintainability, reparability, etc. An ility diagram treats the system or product as a single entity. It is constructed by selecting and ranking in importance eight ilities, as described by Bart Huthwaite in his book “Strategic Design” cited earlier. Initially, measures and goals are established for each ility. Later, as the system design takes form, initial estimates or calculations of each ility are made and plotted on the diagram. As system design trades progress and the design is refined, progress toward the goals is tracked. By including the most important eight ilities and tracking their measures on a chart, the design team has a clear picture of how well the design is progressing and whether any ility is being sacrificed for the benefit of some other ility or performance parameter. An example ility chart as it might appear in the middle of system trades is shown in Figure 6-18. Here the eight ilities are numbered in rank order of importance so that, if tradeoffs cannot be avoided, the most important ilities are favored over less important ones. The line connecting the eight legs is the CBE for each ility. Trends can be indicated by keeping the previous three or four CBEs.


Figure 6-18 An example of the form of an “ility” chart that tracks progress toward goals for eight ilities ranked in importance from 1 to 8.
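A chart of this form is easy to produce with standard plotting tools. The sketch below uses matplotlib and made-up ilities, goals and CBE values on a common 0-10 scale; the ranking numbers are folded into the axis labels.

```python
# A minimal sketch of an "ility" spider chart in matplotlib, with eight
# hypothetical ilities ranked 1 (most important) to 8 and made-up goal
# and CBE values.
import math
import matplotlib.pyplot as plt

ilities = ["1 Reliability", "2 Maintainability", "3 Testability",
           "4 Producibility", "5 Repairability", "6 Upgradability",
           "7 Portability", "8 Recyclability"]
goals = [9, 8, 8, 7, 7, 6, 5, 5]
cbe   = [8, 8, 6, 7, 5, 6, 5, 4]   # current best estimates

angles = [2 * math.pi * i / len(ilities) for i in range(len(ilities))]
ax = plt.subplot(polar=True)
# Close each polygon by repeating the first point.
ax.plot(angles + angles[:1], goals + goals[:1], "--", label="Goal")
ax.plot(angles + angles[:1], cbe + cbe[:1], label="CBE")
ax.set_xticks(angles)
ax.set_xticklabels(ilities)
ax.legend(loc="upper right")
plt.show()
```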

6.3.4 Balancing Customer Needs
A major goal of requirements analysis is to ensure the set of requirements is complete, accurate, and non-redundant. An additional activity during requirements analysis is to identify opportunities to better satisfy the customer's desires and needs. This task is considered part of the Technical Management task shown in Figure 6-4 in a previous post. Working with the customer to adjust requirements often presents an opportunity to save cost or schedule with little impact on the customer. It may also provide significant improvements in a highly desired performance parameter with little impact on a lesser desired parameter. These changes are best identified during requirements analysis; as the program moves into the architecture and then the design phase, changes become more costly, providing less opportunity for savings. Understanding that a very large percentage of the program cost and schedule is committed very early in the program is an important responsibility of the systems engineer. Requirements drive the system architecture trades, and once the architecture is defined, the opportunity for savings is a small percentage of the opportunity early in the program. Pugh diagrams (to be described in Chapter 8) and QFD tables (defined in Chapter 7) are valuable tools for assessing these opportunities.

Tuesday, February 15, 2011

Processes for Technical Performance Measures (TPM)

6.3.3 Define Performance and Design Constraint Requirements
Tasks 11, 13 and 14 shown in Figure 6-5, posted Wed. Dec. 29, 2010, are related to performance and design characteristics and are discussed together here. During the analysis of market data, customer documentation and user needs, and in constructing P-Diagrams and Quality Function Deployment (QFD) QT-1 tables, the top level system performance parameters and key design characteristics such as size, mass, power and data rates are identified. These parameters must be quantified, and metrics must be defined that enable the team, management and customers to track progress toward meeting these important requirements. It is also necessary to judge which design characteristics are rigid constraints and which can be modified in design trade studies if necessary.
6.3.3.1 Define Tracking Metrics (KPPs & TPMs) Developing a consistent process for constructing and updating KPPs, TPMs and any other tracking metrics needed is essential to giving the team, their managers and customers confidence that work is progressing satisfactorily, or early warning that critical requirements are at risk. It is good practice, as defined by IEEE task 11, to begin selecting tracking metrics and setting up tracking procedures during the requirements analysis task, even though the design work is not yet dealing with physical design. One widely used process is presented here; only TPMs are discussed, but the process can be used for other metrics as well. The benefits of TPMs include:
  • Keeping focus on the most important requirements (often called Cardinal Requirements).
  • Providing joint project team, management and customers reliable visibility on progress.
  • Showing current and historical performance vs. specified performance
  • Providing early detection and prediction of problems via trends
  • Being a key part of a design margin management process
  • Monitoring identified Risk areas
  • Documenting key technical assumptions and direction

The steps in generating TPMs include:
  1. Identify critical parameters to trend (e.g., from a QT-1)
  2. Identify specification & target values and the design margin for each parameter
  3. Determine the frequency for tracking and reporting the TPMs (e.g. monthly)
  4. Determine the current parameter performance values by modeling, analysis, estimation or measurement (or combination of these)
  5. Plot the TPMs as a function of time throughout the development process
  6. Emphasize trends over individual estimates
  7. Analyze trends and take appropriate action  

Some systems have a very large number of TPMs and KPPs. Rather than report every metric in such cases, develop a summary metric that reports the number or percent of metrics within control limits and having satisfactory trends. Metrics not within control limits, or having unsatisfactory trends, are presented and analyzed for appropriate actions. This approach works well if those reviewing the metrics trust the project team's process.
It is recommended to track and report both margin and contingency with the current best estimate (CBE). Unfortunately there are no standard definitions for these parameters. Example definitions are shown in Figure 6-13.

Figure 6-13 Example definitions for margin and contingency where contingency is defined as the current best estimate plus/minus one sigma.

An example of a TPM tracked monthly for a characteristic for which smaller-is-better, e.g. mass or power, is shown in Figure 6-14 with the plotted points being the CBE plus contingency.

Figure 6-14 An example TPM chart for a characteristic for which smaller-is-better.

Included in the chart are the specification value, shown as a horizontal line; a target value selected by the project team, plotted as a point at the planned end of the development cycle; a margin depletion line; upper and lower control limits; and the plotted values of CBE plus contingency. Selecting the target value, the slope of the margin depletion line and the control limits is based on the judgment of the project team.
Smaller-is-better characteristics like mass and power consumption tend to increase during the development so initial values should be well under the specification value. Thus a reasonable strategy for selecting the margin depletion line is to link the initial estimate with the final target value. Control limits are selected to provide guidance for when design action is needed. If the characteristic exceeds the upper control limit design modifications may be needed to bring the characteristic back within the control limits. If the characteristic is below the lower control limit then the team should look for ways that the design can be improved by allowing the characteristic to increase toward the margin depletion line value at that time. For example, in Figure 6-14 the first three estimates trend to a point above the upper control limit. The fourth point suggests that the project team has taken a design action to bring the characteristic back close to the margin depletion line.
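The sketch below builds a chart of this form with matplotlib. All numbers are made up: four monthly CBE-plus-contingency reports, a specification of 100, a target of 92, a margin depletion line from the initial estimate to the target, and control limits offset from it by the team's judgment.

```python
# A sketch of a smaller-is-better TPM chart of the form in Figure 6-14,
# with made-up numbers.
import matplotlib.pyplot as plt

months = [0, 1, 2, 3, 4, 5]
spec, target = 100.0, 92.0
cbe_plus_contingency = [84.0, 87.5, 90.5, 88.0]   # four reports so far

# Margin depletion line: initial estimate to the target at month 5.
depletion = [84.0 + (target - 84.0) * m / months[-1] for m in months]
upper = [d + 3.0 for d in depletion]    # control limits (team judgment)
lower = [d - 3.0 for d in depletion]

plt.axhline(spec, color="red", label="Specification")
plt.plot(months, depletion, "k--", label="Margin depletion")
plt.plot(months, upper, "k:", label="Control limits")
plt.plot(months, lower, "k:")
plt.plot(months[:4], cbe_plus_contingency, "o-", label="CBE + contingency")
plt.plot(months[-1], target, "g*", markersize=12, label="Target")
plt.xlabel("Month")
plt.ylabel("Mass (kg)")
plt.legend()
plt.show()
```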
Many requirements must be broken down into separate components attributable to each contributing factor; e.g. the mass of a system can be broken down into the mass of each subsystem. TPMs for such requirements are a rollup of the contributing components. A variation on a TPM particularly useful for such requirements is to explicitly include the basis of estimate (BOE) methods used to arrive at the CBE value each period. Typical BOE methods are allocation, estimation, modeling and measurement. Allocation is the process of taking a top-level rollup and budgeting a specified amount to each contributing component. Estimation, modeling and measurement are methods applied to each contributing component; the results are then combined to determine the value for the top level requirement. Figure 6-15 is an example of a smaller-is-better characteristic with the CBE shown as a stacked bar, with the proportion of the CBE from each method explicitly shown.

Figure 6-15 An example of a TPM chart where the plotted values include the basis of estimate methods to provide more information about the confidence in reported values.

The proportions allocated and estimated drop as modeling and measurements are made, which increases confidence in the reported CBE value. In the example shown, the CBE is the sum of the values for each contributing component, each with a different uncertainty, and the plotted value, i.e. the top of the stacked bar, is the CBE plus the RMS of the uncertainties, i.e. the contingency in this case. For a larger-is-better characteristic the plotted value is the CBE minus the RMS of the uncertainties, assuming the definitions in Figure 6-13.
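The rollup arithmetic is straightforward, as the sketch below shows with made-up component values. Each component carries a BOE method and a one-sigma uncertainty; the CBE is the sum of the values and the contingency is the uncertainties combined in quadrature (the "RMS" referred to above).

```python
# A sketch of rolling up a smaller-is-better TPM from contributing
# components, each with a BOE method and an uncertainty. Values are made up.
import math

# (component, BOE method, value, one-sigma uncertainty)
components = [
    ("Subsystem A", "measurement", 20.0, 0.2),
    ("Subsystem B", "modeling",    35.0, 1.0),
    ("Subsystem C", "allocation",  30.0, 3.0),
]

cbe = sum(value for _, _, value, _ in components)
# Uncertainties combined in quadrature give the contingency.
contingency = math.sqrt(sum(sigma**2 for _, _, _, sigma in components))
plotted = cbe + contingency   # top of the stacked bar (smaller-is-better)
print(f"CBE = {cbe:.1f}, contingency = {contingency:.2f}, plotted = {plotted:.2f}")
```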
One alternative method of tracking and reporting metrics is using spreadsheet matrices rather than charts. This approach allows the values for each contributing component to be tracked and reported. An example for a smaller-is-better characteristic attributable to three component subsystems is shown in Figure 6-16.

Figure 6-16 An example TPM for a smaller-is-better characteristic attributable to subsystems.

The total allocated value can be either the specification value or the specification value minus a top level margin, i.e. a target value. The BOE methods for each contributing component's CBE can be included in additional columns if desired. The disadvantage of a matrix format compared to a time chart is that it isn't as easy to show trends with time, the margin depletion line value and the control limits. Matrices are useful for requirements that involve dozens of contributing components, each important enough to be tracked. The detail can be presented in a matrix with color coding to highlight problem areas, and either a summary line or a summary chart used for presentations to management or customers.
A third alternative for tracking metrics is a tree chart. An example using the data from Figure 6-16 as mass in kg is shown in Figure 6-17.

Figure 6-17 An example of a TPM in a tree chart format.

A tree chart has similar disadvantages to a matrix compared to a time chart, but both trees and matrices have the advantage of showing the values for contributing components. Sometimes it is valuable to use both a time chart and a tree or matrix for very important TPMs or KPPs.
Common sense guidelines to follow in managing tracking metrics include:
  • Keep metrics accurate and up to date
  • Be flexible with metric categories (add, delete, modify as development progresses)
  • Dovetail into customer process
  • Plan ahead to achieve maturing Basis Of Estimates (BOE)
  • Make performance prediction part of normal design process (flow-up, not run-down)
  • Include in program status reports, e.g. monthly.
It should be part of each IPT's normal work to report or post to a common database the estimates for metrics under its design responsibility. This makes it easy for one of the systems engineers to compile up-to-date and accurate metrics for each reporting period. If this is not standard practice, then someone has to chase down each person responsible for part of a metric each period to obtain their input: a time-consuming and distasteful practice.