
Tuesday, April 26, 2011

Introduction to Quality Function Deployment

Chapter 7 is the work of David Melton. David is an outstanding mechanical/thermal engineer, a highly experienced system engineer and executive manager. He has extensive training in Quality Function Deployment and years of experience putting it into practice. This chapter can be used as a stand-alone guide for those desiring to use QFD as their primary systems engineering process or to augment more traditional systems engineering.

7        Quality Function Deployment (QFD) In System Engineering
7.0 Introduction
System development is a complex process. Bringing a product or system from concept through production and deployment (distribution) is generally called the Product Development Process. Considering product development as a process requires looking at the network of tasks necessary to bring the product to market and to provide a product or system that fulfills the “Voice of the Customer” (VOC). The VOC represents a definition of the needs and wants of the customer. Regardless of how it is presented, the product development process is exceedingly complex. It consists of numerous trade-offs, shared responsibilities and differences of interpretation, often resulting in conflicting priorities. A substantial body of technical knowledge must be employed (and deployed), often over a relatively long time frame, while experiencing constant resource changes. The product development process requires a great deal of communication and a substantial work effort from many different functional groups. Product development can no longer be viewed as simply a Design Engineering function. Design Engineering is but one of several interrelated processes and functions involved in developing a quality product. Ultimately the communication of information gets back to the customer in the form of a product or service. Minimizing the risk and accomplishing this effort successfully requires an effective communication and tracking tool and methodology. Implementation of the QFD method can achieve this objective.
A system, however complex, must be carefully planned to minimize subsequent redesign.  The full effect of inadequate planning (or of a poor understanding of relationships) is rarely detected until late in the development cycle, or not until hardware is fabricated or code written.  The later a design change or design defect is detected, the more time and money a redesign effort incurs. System Engineering is primarily about understanding relationships and interdependencies during system development. The System Engineering objective is to translate customer requirements into products and services that fulfill the customer’s needs. Systems Engineering has emerged as a distinct professional discipline in direct response to the increasing complexity of new development projects in all market applications.
Quality Function Deployment (QFD) is a systematic process for translating customer requirements into appropriate company requirements at each stage, from research and product development, to engineering and manufacturing, to marketing/sales and distribution.  The output of the QFD process is a series of matrices that define critical parameters and requirements throughout the product life cycle.  A Product Life Cycle (PLC) is the sequence of stages a product goes through from conception through design, production, distribution and final phase-out (commonly called cradle to grave).
The prime assumption underpinning system engineering is that a product should be designed to fulfill the customer’s actual needs. Self-evident as this approach may seem, it is surprisingly common for companies to develop products with little or no customer input or confirmation of perceived customer needs. Even when a large market exists, a product can fail when the customer's real needs are poorly understood or improperly deployed to the subsequent design process. The QFD process applied to hardware development is very similar and complementary to the recommended System Engineering (SE) process.  Much of the information generated by the QFD process is information needed to complete the System and Component specifications.  One added benefit that QFD brings to the SE process is a method for prioritizing requirements.  QFD also provides a method for identifying the critical design requirements (i.e. cardinal requirements).  QFD provides a pictorial illustration of the System and Component Specifications and shows where requirements are allocated to the subsystems.
The QFD process is complementary and beneficial to decision making in the system development stage. QFD is a structured process for identifying these relationships and determining which are most important in driving requirements and system development.  QFD generates the skeleton structure (architecture) for the system and subsystem specifications. When QFD is integrated into the system development process, it provides value-added information and knowledge to aid decision making while optimizing the product design.
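As a concrete illustration of the prioritization idea, the core House of Quality arithmetic can be sketched in a few lines of Python. The customer needs, weights, and 9/3/1 relationship strengths below are invented for illustration and are not from any particular QFD study; the roll-up (technical importance = sum of need weight times relationship strength) is the standard calculation.

```python
# Illustrative QFD prioritization: the customer needs, their weights, and
# the 9/3/1 relationship strengths are assumed example values.
customer_needs = {"toasts evenly": 5, "easy to clean": 3, "heats quickly": 4}

# Relationship matrix: strength (9 = strong, 3 = moderate, 1 = weak, 0 = none)
# linking each customer need to each technical design requirement.
relationships = {
    "toasts evenly": {"element spacing": 9, "thermostat accuracy": 3, "crumb tray": 0},
    "easy to clean": {"element spacing": 1, "thermostat accuracy": 0, "crumb tray": 9},
    "heats quickly": {"element spacing": 3, "thermostat accuracy": 9, "crumb tray": 0},
}

def technical_importance(needs, rels):
    """Roll the customer weights through the relationship matrix."""
    scores = {}
    for need, weight in needs.items():
        for req, strength in rels[need].items():
            scores[req] = scores.get(req, 0) + weight * strength
    return scores

scores = technical_importance(customer_needs, relationships)
# Rank the design requirements by importance to the customer.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
# ranked[0] is ("element spacing", 60): the cardinal requirement here.
```

The ranked list is exactly the kind of prioritization the text describes: it tells the team which technical requirements deserve the most design attention.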
This section:
·         Defines QFD and how it can be integrated into the SE and System Development process to provide complementary benefits and aid decision making in defining and specifying a system.
·         Explains the benefits and features of using QFD in the System Development process.
·         Presents a simple structured system engineering approach to product development using QFD prior to detailed product design.
·         Focuses on hardware development, but in general the concepts can be applied to software development to capture customer needs and requirements and to flow requirements through algorithm development.
Standard system engineering techniques have been defined by the United States (US) Department of Defense (DoD) and NASA for decades, and these techniques are also applied to large commercial products, e.g. in the automotive and aircraft industries.  The health care sector has introduced QFD as a means to develop systems for health care. The QFD initiative in the US for hardware items follows a structured and disciplined process very analogous to the SE process defined by DoD. DoD SE methodology standardizes the flow-down and traceability of specifications for complex products from customer requirements through production, operation, and disposal. SE integrates all of the disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation.
The principles of system engineering using QFD span the entire life cycle of a product, but this chapter is concerned with the early feasibility and concept stages. Studies show that a large percentage of a product's manufacturing cost is frozen at concept selection time. Many companies in all industries historically initiate product cost reduction efforts after the product is released to the production floor. If, say, 85% of the manufacturing cost is frozen at concept development time, then cost reduction after production release can address only the remaining 15% of the manufacturing cost. The value-added return on the investment is likely to be low. Therefore, to make significant cost reductions, the development effort must focus on the early concept selection stage, where alternative concepts and technologies are considered.
7.1 Background
The word quality in QFD has led to much misunderstanding. QFD was first introduced in most organizations through the Quality Assurance departments. In the QFD process several functional organizations other than the quality department are vital participants. Because the name can be misleading, QFD has been given a bad connotation. QFD is not a quality tool to audit functional organizations, rather it is a structured planning tool to guide and direct the product development process. Let us therefore not be resistant to the use of QFD because of the name, but rather seek to understand what QFD embodies.

Quality Function Deployment (QFD) is a translation of six Japanese Kanji characters:

HIN SHITSU KI NO TEN KAI

As with any translation there is room for other interpretations. Each pair of the Kanji characters has alternate translations; Figure 7-1 illustrates these different translations. The most accepted interpretation is Quality Function Deployment (QFD).

  Figure 7-1 The Kanji characters for QFD have several alternate translations.

QFD has a broad meaning. It involves taking the features of a product driven by the customer's needs and evolving the product functions into an overall product. We may think of QFD as the act of taking the voice of the customer (VOC) or user all the way through the product development process to the factory floor and out into the marketplace. QFD is therefore more than a quality tool; it is an important planning tool for introducing new products and upgrading existing products.

7.1.1 Features of QFD - As previously stated, QFD:
·         Is a systematic means of ensuring that the demands of the customer and the market place are accurately translated into products and/or services.
·         Is a structured approach that provides both a planning tool and a process methodology.
·         Identifies the most important product characteristics, the necessary control issues and the best tools and techniques to use. 
·         Applies to all stages of product development and provides a comprehensive tracking tool and communication medium.
·         Applies a cross-functional team approach combining information and expertise from marketing, sales, design engineering and manufacturing.
·         Provides a systematic and disciplined method of creating priorities, making improvements, and defining goals and objectives applicable to the company's products and/or services.
QFD is a method, not a panacea; it must be done correctly, and it takes up-front time and resources to get the best possible results.
7.1.2 Benefits of QFD - QFD is a relatively simple but highly detailed process. Upon initial evaluation it may appear to be too detailed - perhaps not worth the effort. However, QFD has proven benefits, including:
1.      A PROPRIETARY KNOWLEDGE BASE: The QFD process leads the participants through a detailed thought process, pictorially documenting their approach. The graphic and integrated thinking that results leads to the preservation of technical knowledge, minimizing the knowledge loss from retirements or other organizational changes. This use of QFD helps transfer knowledge to new employees, starting them higher on the learning curve. The use of QFD charts results in a large amount of knowledge captured and accumulated in one place. The charts provide an audit trail of the decisions made by the project team. Once a QFD project has been completed, the resulting charts may be used as a starting point (a “re-engineering starting point”) for future versions of similar products. The bottom line of QFD is higher quality, lower cost, and shorter development time, resulting in a substantial competitive advantage.
2.      SATISFIED CUSTOMERS: QFD forces increased understanding of customer requirements because it is driven by the voice of the customer rather than the voice of the engineer or executive. By focusing on the customer, numerous engineering decisions are guided to favor the customer. While numerous trade-offs are always necessary for any well optimized product, these trade-offs are made for customer satisfaction, not for engineering convenience.
3.      FEWER START-UP PROBLEMS: The preventive approach fostered by QFD results in fewer downstream problems, especially at production start-up.
4.      LOWER START-UP COST: Fewer start-up problems translate directly into reduced start-up costs.
5.      LESS TIME IN DEVELOPMENT: This approach not only saves money, it also saves overall development time. Product introduction cycle time has been shown to be a third to a half shorter when QFD is used to thoroughly plan the product or service.
6.      FEWER FIELD PROBLEMS: The cost savings have been demonstrated to continue well beyond start-up, and are reflected in reduced problems for customers and consequent warranty cost reductions.
7.      FEWER AND EARLIER CHANGES: A major advantage of QFD is that it promotes preventive rather than reactive development of products. This preventive approach has demonstrated fewer downstream production problems, especially at production start-up, commonly referred to as “the transition from development to production”.

Tuesday, April 19, 2011

Checking the Partitioning of the Physical Architecture

6.6.4 The Design Loop
As the physical design is created, many alternatives should be considered. One task is to check that the grouping and sequencing of functions defined during the functional analysis task leads to an effective physical partitioning, i.e. a modular design as described in the previous section. If reasonable physical designs don’t result in subsystems being allocated to single functions or single groups of functions, then recheck the grouping of functions to see if alternative groupings lead to cleaner physical partitioning and more modularity. It is important to seek clean partitioning of subsystems and their associated functions because the cleaner the partitioning, the easier systems are to integrate, test, maintain and upgrade. A function that is implemented in two or more subsystems results in system designs that are more difficult to maintain and upgrade, and often more difficult to test. Envision the design loop as iteration between functional and physical design until both result in a modular physical architecture.
A second task during design synthesis is conducting trade studies, described in a later chapter, to select between design alternatives. Again when evaluating alternative architectures consider the partitioning for each design alternative and examine the possibility that modifying the functional architecture might lead to a better functional to physical allocation and partitioning.

6.6.4.1 Functional to Physical Allocation Matrices
A simple tool that is helpful in refining the functional and physical architectures is a functional to physical allocation matrix. An example matrix for the toaster functional architecture shown in Figure 6-24 and the design concept architecture shown in Figure 6-31 is shown in Figure 6-33. The functional to physical allocation matrix is particularly helpful in examining the partitioning of a design concept for modularity. Typically, the more diagonal this matrix, the better the modularity. However, opportunities for one physical entity to perform two or more functions are highly desirable and are readily apparent in the matrix. Similarly, when the matrix shows a function spread across several physical entities, it provides a visual means of examining whether the physical design concept is sound or should be changed to allow cleaner partitioning. Sometimes the nature of a function causes it to be spread across several physical entities without complicating the design in ways that cause manufacturing, testing or upgrade problems. For example, in Figure 6-33 the “apply heat” function is allocated to three entities, and this is probably a reasonable design approach because it likely reduces the parts count and makes operation simpler.


Figure 6-33 A functional to physical allocation matrix for one candidate toaster design concept.
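For readers who keep their allocation matrix in a spreadsheet or script, the partitioning checks described above are easy to automate. The Python sketch below uses invented toaster functions and physical parts (not the actual Figure 6-33 entries) to flag functions spread across many physical entities and entities that serve more than one function.

```python
# Illustrative functional-to-physical allocation: function -> the physical
# entities it maps to. All names and allocations are assumed examples.
allocation = {
    "hold bread":   ["carriage"],
    "apply heat":   ["heating elements", "reflector", "housing"],
    "set darkness": ["control knob"],
    "eject toast":  ["carriage", "spring"],
}

def spread_functions(alloc, threshold=2):
    """Functions implemented by more than `threshold` physical entities:
    candidates for re-partitioning the functional architecture."""
    return [f for f, parts in alloc.items() if len(parts) > threshold]

def shared_entities(alloc):
    """Physical entities that perform two or more functions: often a
    desirable parts-count reduction, worth confirming in trade studies."""
    counts = {}
    for parts in alloc.values():
        for p in parts:
            counts[p] = counts.get(p, 0) + 1
    return [p for p, n in counts.items() if n >= 2]

flagged = spread_functions(allocation)   # ["apply heat"] - review partitioning
shared = shared_entities(allocation)     # ["carriage"] - possible parts saving
```

As in the text, a flagged function is not automatically a design flaw; it is a prompt to check whether the spread complicates manufacturing, testing or upgrades.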


Tuesday, April 12, 2011

Guidelines for achieving the best design concept

6.6.2 Decision Management during Design
The degrees of freedom of a design are greatest, and the cost to make a design change is lowest, during concept design. This is illustrated schematically in Figure 6-32. The high degree of design freedom means that design alternatives are relatively unconstrained as long as they map to the functional architecture and meet the functional requirements. In general, the greater the design degrees of freedom, the greater the potential for influencing performance, life cycle cost and other important measures of design quality. Decisions made on top level architecture during concept design not only directly reduce the degrees of freedom, but often constrain the design alternatives available at the lower levels of the system hierarchy addressed in preliminary and detailed design. This argues strongly for conducting the most extensive exploration of design alternatives during concept design.

Figure 6-32 Design alternatives cost less to explore and have a greater potential influence during concept design.

The objective is to explore alternative concepts sufficiently that high confidence is achieved that the selected design concept is “best” by a number of measures. These measures include the obvious ones: high performance on high priority customer requirements, low life cycle cost, excellent “ility” measures (manufacturability, testability, repairability, etc.), and perhaps attractive features that might increase sales. Thus there is a tension between the need to make design decisions quickly and the need to explore a wide range of design alternatives. Once the desired concept design is established, i.e. a baseline design is defined and trade studies are conducted to select the best alternative for the final baseline, the design freedom is reduced, so the opportunities to significantly improve the design are also reduced.
A standard approach to achieving the desired characteristics in concept designs is to seek modular designs. Here the term module refers to design elements, i.e. subsystems, assemblies etc. Modular designs are achieved by refining the allocation of functions to physical modules and partitioning functions between the modules.

6.6.3 Partition for Modular Designs
The DoD SEF says modular designs have the three desirable attributes of low coupling, high cohesion, and low connectivity. Coupling is the amount of information shared between modules; the lower the amount of information that must flow between modules the more independent they are. Having low dependence lowers design risk and makes future upgrades or modifications easier. Cohesion is the similarity of tasks performed within a module. High cohesion leads to easier and less complex designs. A design for which a single component performs multiple functions has high cohesion. Connectivity is a measure of the internal interfaces between modules. A design that has multiple interconnections between the internal parts of one module and those of a neighboring module has undesirable high connectivity, which again complicates design, integration and testing as well as future upgrades.
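The SEF treats these three attributes qualitatively, but a rough screening of candidate partitionings can be automated. The Python sketch below, with invented module, part and interface names, counts interfaces that cross a module boundary as a simple assumed proxy for coupling and connectivity; it is not a standard metric, just a way to compare alternatives consistently.

```python
# Illustrative interface list for one candidate partitioning:
# (from_part, from_module, to_part, to_module). Names are assumed examples.
interfaces = [
    ("sensor", "A", "filter", "A"),     # internal to module A: no coupling
    ("filter", "A", "processor", "B"),  # one clean A-to-B interface
    ("sensor", "A", "display", "B"),    # a second A-to-B link raises connectivity
]

def boundary_crossings(links, m1, m2):
    """Count interfaces that cross the boundary between modules m1 and m2.
    Fewer crossings suggests lower coupling/connectivity, i.e. more modularity."""
    return sum(1 for _, from_mod, _, to_mod in links
               if {from_mod, to_mod} == {m1, m2})

ab_crossings = boundary_crossings(interfaces, "A", "B")  # 2 crossings
```

Scoring each candidate concept this way supports the trade between modularity and risk discussed below: the partitioning with the fewest boundary crossings is usually the lower-risk starting point.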

Note that modularity is a measure of system complexity; the higher the modularity, the lower the complexity. Risk is a measure of the complexity of the development program; the higher the risk, or the more risks a program has, the more complex the development becomes due to the work necessary to mitigate risk. Modularity and risk are related. A system concept design with low modularity is usually higher risk than a design with high modularity. Therefore, to achieve the lowest program risk, design concepts should be traded to find the highest modularity. However, risk must be evaluated for each concept to ensure that, in striving for higher modularity, unnecessary risks haven’t been introduced.

Tuesday, April 5, 2011

Introduction to Concept Design

6.6 Design Synthesis
Completing the initial definition of the functional architecture sets the stage for beginning design synthesis. The design synthesis task defines physical elements of hardware and software to carry out the functions in the functional architecture and to fulfill the requirements allocated to the functions. It is an allocation and partitioning task. Allocation refers to the mapping of functions to physical elements and partitioning refers to the grouping of functions and physical elements. It’s helpful if the functional architecture is defined to the second level, at least in draft form, before beginning design synthesis. Design synthesis is done in steps. Usually the steps are called concept design, preliminary design and detailed design. Each step adds more detail to the design and defines the design to lower levels of the system hierarchy. At the completion of detailed design a complete set of procurement documentation, manufacturing drawings, detailed software descriptions and integration and test (I&T) documentation is finalized and ready for procurement of parts, manufacturing, software coding and I&T.
6.6.1 Concept Design
Concept design is emphasized here as systems engineers have a greater role in the concept design than in preliminary and detailed design. The objective of concept design is to convert the functional architecture to a physical architecture. In this process the functional architecture and the allocated requirements may be refined and other supporting documentation developed. Three outputs from design synthesis during concept design are a physical architecture, a baseline design and a physical view of the system.
The physical architecture is defined by a physical block diagram or signal flow block diagram that schematically illustrates the relationships and interfaces between the physical subsystems (hardware and software) that map to the functional architecture. The physical architecture is part of the system architecture, which includes the enabling products and services needed by the system in all of its life cycle modes. An example of a simple physical block diagram of a candidate concept design for the toaster defined by the functions shown in Figure 6-24 is shown in Figure 6-31. (Again, apologies to toaster designers for any ignorance of toaster design.)


Figure 6-31 A physical block diagram for a candidate toaster concept design.

Systems that involve the collection, processing and communication of signals or similar information are often better described by a signal flow diagram. A signal flow diagram is a physical block diagram that follows the system signals from initial collection to their output from the system. Modularity is often easier to visualize in signal flow diagrams than block diagrams. Signal flow diagrams are typically more complex than simple block diagrams so it’s usually best to define alternative concepts and conduct trade studies using simple block diagrams. Once the final baseline concept is selected then constructing a signal flow diagram helps explain the selected design concept better than a simple block diagram.
The baseline design includes the functional architecture, the physical architecture, the system specification, and the ICD. The baseline design evolves with the design maturity and is the basic item under configuration management. The baseline design is a means for facilitating decision management during the three stages of design synthesis. At each stage (concept design, preliminary design and detailed design) it is good practice to force the work to a baseline design quickly and then conduct trade studies to refine the selected baseline. Otherwise too many design decisions are open at any one time and control of the design work becomes difficult.
The physical view includes all of the diagrams, documents, models, etc. that describe how the system is constructed, how it interfaces with humans and other supporting systems during the life cycle modes, any customer supplied equipment, and any constraints on the design or operations.




Tuesday, March 29, 2011

Methods for Verifying Functional Architecture

6.5.4 Verify the Functional Architecture
The functional architecture is the FFBDs and the allocated requirements. The collection of all documentation developed during the functional analysis and allocation task is called the functional view. The final task in defining the functional architecture is to review all of the functional view documentation for consistency and accuracy. Check the functions defined for each mode and sub mode to verify that no functions are missing and that the requirements allocated to each function are appropriate for each mode and sub mode. Figure 6-29 shows an example of a modes-to-functions matrix useful for verifying that all top level functions needed for each sub mode of a toaster in its In Use mode are defined properly. This example only examines one system mode, but the process for examining all modes and all lower level functions is just an extension of the matrix.
The methodology of verifying functional design by using two different tools to describe the same functional requirements also applies to mode transitions. Figure 6-30 is an example of a matrix used to define the allowed transitions among the sub modes of the In Use mode, as previously defined with a mode transition diagram in Figure 6-11. Although this is a trivial example, it illustrates the methodology.


Figure 6-29 An example of a Functions to System Modes matrix that facilitates verifying that all functions are defined for all modes.

Figure 6-30 Allowable mode transitions can be defined in a matrix as well as in a diagram.
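An allowed-transition matrix captured in machine-readable form can also double as a test oracle for the mode logic. The Python sketch below uses assumed toaster sub modes and transitions, not the actual content of Figure 6-30, to check an observed mode sequence against the matrix.

```python
# Illustrative allowed-transition table: mode -> sub modes it may enter.
# The sub mode names and permitted transitions are assumed examples.
allowed = {
    "Idle":     {"Toasting"},
    "Toasting": {"Idle", "Ejecting"},
    "Ejecting": {"Idle"},
}

def check_sequence(seq, table):
    """Return the first illegal transition in seq, or None if all are allowed."""
    for current, nxt in zip(seq, seq[1:]):
        if nxt not in table.get(current, set()):
            return (current, nxt)
    return None

bad = check_sequence(["Idle", "Toasting", "Ejecting", "Toasting"], allowed)
# ("Ejecting", "Toasting") - a transition the matrix does not permit
```

This is the same cross-check the text recommends: the diagram and the matrix describe the same behavior, and disagreements between them (or with observed behavior) surface as failed checks.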

Revisit the documentation in the operational view to verify that the functional architecture accounts for every function necessary to fulfill the operational requirements and that no unnecessary functions have been added. Verify that every top level performance and constraining requirement is flowed down, allocated and traceable to lower level requirements and that there are no lower level requirements that are not traceable to a top level requirement.

Tuesday, March 22, 2011

Tools for Defining and Verifying Functional Interfaces

6.5.3 Define and Verify Functional Interfaces (Internal/External)
Logical interfaces with external elements are defined in the context diagram and the FFBDs and internal interfaces are defined in the FFBDs. Both types of interfaces must be analyzed to verify that all interfaces are properly located and defined. Examine each external interface and verify that the information coming from or going to the interface matches the information being handled by the parent function in the chain of lower level child functions. Similarly examine each function and verify that all information coming from or going to the function is accounted for; that no function has an output that doesn’t go to either another function or to an external logical interface; and that no function requires information that is not coming to the function from another function or external interface. This task is made easier if the links in a process-oriented FFBD are labeled. An example of a simple process-oriented FFBD of a toaster with internal and external interfaces is shown in Figure 6-26.
(Apologies to experienced designers of toasters for any mistakes by the authors who have limited domain knowledge of toaster design. We use the example of a toaster because it is simple enough that diagrams and models fit on a page and everyone has some idea of what a toaster does and how it might work. To those “virtuous and pure” engineers whose response is “toasters don’t apply to my work so these examples are useless to me” we remind you that the authors have used these same methods on systems costing hundreds of millions of dollars to develop. Learn the methodologies illustrated by these examples and don’t be put off by errors or incompleteness in these examples or the fact that your systems are much more complex.)

Figure 6-26 An example of a FFBD for a toaster showing the internal and external interfaces for each function of the Operational mode.

A “from”/“to” matrix of functions in a particular mode is an alternate tool for defining interfaces for functions. An example is shown in Figure 6-27 for a toaster in its operational mode.

Figure 6-27 A Matrix of Functions to Functions is an alternate tool for defining internal and external interfaces among functions.
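A from/to matrix captured as data also makes the interface checks of this section mechanical. The Python sketch below, with invented function, interface and signal names (not the Figure 6-27 content), flags any output whose destination is neither another function nor an external interface, and any function that no link feeds.

```python
# Illustrative from/to links: (source, destination, signal). The external
# interfaces, functions and signals are assumed example names.
EXTERNAL = {"user", "power"}
links = [
    ("user", "set darkness", "darkness setting"),
    ("set darkness", "control heat", "target temperature"),
    ("control heat", "apply heat", "element drive"),
    ("apply heat", "monitor toast", "heat"),
]
functions = {"set darkness", "control heat", "apply heat", "monitor toast"}

def dangling_outputs(links, funcs, externals):
    """Outputs whose destination is neither a function nor an external interface."""
    return [(s, d, sig) for s, d, sig in links
            if d not in funcs and d not in externals]

def unfed_functions(links, funcs):
    """Functions that never appear as a destination, i.e. receive no input."""
    fed = {d for _, d, _ in links}
    return sorted(funcs - fed)

clean = dangling_outputs(links, functions, EXTERNAL)   # [] - all outputs land
unfed = unfed_functions(links, functions)              # [] - all functions fed
bad_link = ("monitor toast", "alarm", "done signal")   # "alarm" is undefined
found = dangling_outputs(links + [bad_link], functions, EXTERNAL)
# [("monitor toast", "alarm", "done signal")] - the undefined destination
```

A full check would also flag functions with no outgoing link; the point is that once interfaces are labeled data, the verification rules in section 6.5.3 run for free.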

N-Squared diagrams are useful tools for analyzing interfaces for systems with functions having many internal interfaces. This tool also provides verification of the grouping and sequencing of lower level functions. It’s much easier to detect sequencing problems in an N-Squared diagram than on a FFBD. An example of an N-Squared diagram used for defining internal and external interfaces is shown in Figure 6-28. The advantages of the N-Squared diagram aren’t apparent in this simple case but imagine if the functions were more randomly sequenced along the diagonal. Then there would be arrows on the left of the diagonal indicating poor sequencing.

It is good practice to develop two different tools for defining internal and external interfaces; for example a FFBD and an N-Squared diagram. The two are then compared to verify that all interfaces are defined, grouped and sequenced correctly and consistent with the definitions of functions in the data dictionary. The small amount of time it takes to verify functional interfaces via two different tools is sound risk mitigation against making a mistake that isn’t discovered until system or subsystem testing when correcting the error is very costly.


Figure 6-28 An N-Squared diagram is an excellent tool for defining, grouping and sequencing interfaces.
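The “arrows left of the diagonal” test is simple enough to automate once the N-Squared diagram is kept as an ordered function list plus a set of flows. The Python sketch below uses an invented signal-processing chain, not the Figure 6-28 content.

```python
# Illustrative N-Squared data: functions in diagonal order, plus the data
# flows between them. Names and flows are assumed examples.
order = ["acquire", "filter", "process", "display"]
flows = [("acquire", "filter"), ("filter", "process"),
         ("process", "display"), ("display", "filter")]  # last one feeds back

def below_diagonal(order, flows):
    """Flows running from a later function to an earlier one in the diagonal
    order; these appear below the diagonal and suggest poor sequencing
    (or a genuine feedback loop worth documenting)."""
    pos = {f: i for i, f in enumerate(order)}
    return [(a, b) for a, b in flows if pos[a] > pos[b]]

feedback = below_diagonal(order, flows)   # [("display", "filter")]
```

Re-running the check for alternative orderings of the diagonal is a quick way to search for the sequencing with the fewest below-diagonal arrows.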

Wednesday, March 16, 2011

Allocating Performance Requirements

6.5.2 Allocate Performance and Other Limiting Requirements
It is important not to get caught up in the process of developing the various documents and diagrams and lose sight of the objective, which is to develop a new system; a primary responsibility of the systems engineers is to define complete and accurate requirements for the physical elements of the new system. Having decomposed the top level system modes into their constituent modes, and the top level functions of the system into the lower level functions required for each of the decomposed modes, the next step is to allocate (decompose) the performance and other constraining requirements that have been allocated to the top level functions down to the lower level functions.
The primary path is to follow the FFBDs so that requirements are allocated for every function and are traceable back to the top level functional requirements. Traceability is supported by using the same numbering system used for the functions. Requirements Allocation Sheets may be used, as described in the DoD SEF, or the allocation can be done directly in whatever tool is used for the Requirements Database. Other useful tools are the scenarios, Timeline Analysis Sheets (TLS) and IDEF0 diagrams developed during requirements analysis and functional decomposition. If the team followed recommended practice and began developing or updating applicable models and simulations, those tools can be used to improve the quality of allocated requirements. For example, basing the time budget for each function in a TLS on the results of simulations or models is certainly more accurate than estimating or arbitrarily allocating times for each function so that the time requirement for a top level function is met.
Another example is any kind of sensor with a top level performance requirement expressed as a probability of detecting an event or sensing the presence or absence of something. This type of performance requirement implies that the sensor exhibit a signal to noise ratio in the test and operational environments specified. Achieving the required signal to noise ratio requires that every function in the FFBD from the function that describes acquiring the signal to the final function that reports or displays the processed signal meets or exceeds a level of performance. Analysis either by models or simulations is necessary to balance the required performance levels so that the top level performance is achieved with the required or desired margin without any lower level functions having to achieve performances that are at or beyond state of the art while other functions are allocated easily achievable performances.
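As one concrete example of this kind of balancing analysis (an assumed illustration, not taken from the text), the standard Friis cascade formula shows why the early stages of a sensing chain dominate the achievable signal to noise ratio, and therefore why performance allocations cannot simply be spread evenly across functions:

```python
# Illustrative signal chain: (name, gain, noise_factor) as linear ratios,
# not dB. The stage names and values are assumed examples.
stages = [
    ("sensor front end", 10.0, 2.0),
    ("amplifier",       100.0, 1.5),
    ("digitizer",         1.0, 4.0),
]

def cascade_noise_factor(stages):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ..."""
    total, gain = 1.0, 1.0
    for _, g, f in stages:
        total += (f - 1.0) / gain
        gain *= g
    return total

F = cascade_noise_factor(stages)
# F is about 2.053: the digitizer's poor noise factor is divided by the
# preceding gain of 1000, so the front end sets the achievable SNR.
```

The same pattern generalizes: a model of how lower level performances combine lets the team trade allocations so no single function is pushed beyond the state of the art.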
Functional trees are very useful for budgets and allocations, particularly con-ops timelines and software budgets since physical elements don’t have time, lines of code (LOC) or memory requirements but functions do. Transforming the FFBD into an indented list of numbered and named functions on a spreadsheet facilitates constructing a number of useful tables and diagrams. Consider a timeline analysis sheet (TLS) for a hypothetical system having two functions decomposed as shown in Figure 6-24.

Figure 6-24 A hypothetical TLS for a system with two functions decomposed into its sub functions.
The TLS illustrates both the time it takes to execute each sub function in a particular con-ops scenario and the starting and stopping times for each time segment. If the functions were to be executed sequentially nose to tail then just the numerical time column would be needed and the total time would be determined by the sum of the individual times.
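In spreadsheet or script form the TLS reduces to a list of numbered sub functions and budgeted times, and the nose-to-tail total falls out of a sum. The Python sketch below uses invented function numbers and time budgets, not the Figure 6-24 values.

```python
# Illustrative TLS rows: (function number, name, budgeted seconds).
# Numbering follows the FFBD convention; all values are assumed examples.
tls = [
    ("1.1", "accept command", 0.2),
    ("1.2", "configure",      1.3),
    ("2.1", "execute",        4.0),
    ("2.2", "report",         0.5),
]

def total_time(rows):
    """Nose-to-tail total if the sub functions execute strictly sequentially."""
    return sum(t for _, _, t in rows)

def over_budget(rows, budget):
    """True if the sequential total exceeds the top level time requirement."""
    return total_time(rows) > budget

elapsed = total_time(tls)   # 6.0 seconds for this illustrative scenario
```

With overlapping (non-sequential) functions the start and stop columns matter, but the same table drives that analysis as well.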
The same function list can be used for software budgets or allocations. An example is shown in Figure 6-25.


Figure 6-25 Software lines of code and memory can be budgeted or allocated to a list form of the system functions.
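When the indented function list lives in a script rather than a spreadsheet, budgets roll up naturally by function number prefix. The Python sketch below uses invented numbering and budget values to show the roll-up.

```python
# Illustrative leaf-function budgets: function number -> (LOC, memory in KB).
# The numbering and all budget values are assumed examples.
budgets = {
    "1.1": (400, 16),
    "1.2": (250, 8),
    "2.1": (1200, 64),
}

def rollup(leaf_budgets, parent):
    """Sum LOC and memory budgets of every leaf function under a parent
    number; an empty prefix rolls up the whole system."""
    loc = sum(l for num, (l, _) in leaf_budgets.items() if num.startswith(parent))
    mem = sum(m for num, (_, m) in leaf_budgets.items() if num.startswith(parent))
    return loc, mem

subsystem_1 = rollup(budgets, "1.")   # (650, 24) for function 1 and its children
system_total = rollup(budgets, "")    # (1850, 88) for the whole system
```

Because the roll-up keys on the same numbering used in the FFBD, the budgets stay traceable to the functional architecture, exactly as the text recommends for time budgets.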