Design for Six Sigma

 Common DFSS Methodologies


Design for six sigma (DFSS) is the suggested method to bring order to product design. Hockman, Suh, and Paul have noted that 70% – 80% of all quality problems are design related. Emphasis on the manufacturing side alone concentrates on the tail end of the problem solving process; the emphasis should be at the front end. Solving a problem downstream is more costly and time consuming than fixing it at the source. In 1999, NIST reported that the automotive supply chain lost at least a billion dollars a year due to poor interoperability of digitally designed product data.

In recent years, American industry has placed considerable emphasis on downsizing, restructuring, process redesign, cost containment, etc. These methods are directed at holding the line on costs and can be described as denominator management. In the business world, the equation for return on investment, or return on net operating assets, has both a numerator (net income) and a denominator (investment). Managers have found that cutting the denominator (investments in people, resources, materials, or other assets) is an easy way to make the desired return on investment rise, at least in the short term. Growing the numerator of the equation requires a different way of thinking: ways must be found to increase sales or revenues, and one of those ways is introducing more new products for sale to customers. New products account for a large percentage of company sales (40%) and profits (46%). Of course, not every new product will survive. Two studies listed in the Table below provide some statistics.

Progression of New Products Through Development

The Table indicates that a large number of ideas are needed. These ideas are sorted, screened, and evaluated in order to obtain feasible ideas, which enter the development stage, pass into the launch stage, and become successful products. Cooper provides more details of how winning products are obtained:

  1. A unique, superior product: This is a product with benefits and value for the customer.
  2. A strong market orientation: An understanding of customer needs and wants exists.
  3. Predevelopment work: Up front activities such as screening, market analysis, technical assessment, market research, and business analysis are vital before development starts.
  4. Good product definition: A company must undertake good product and project definition before development begins.
  5. Quality of execution: The development process has many steps. A company must execute these steps with the proper amount of detail and correctness.
  6. Team effort: Product development is a team effort that includes research & development, marketing, sales, and operations.
  7. Proper project selection: Poor projects must be killed at the proper time. This provides adequate resources for the good projects.
  8. Prepare for the launch: A good product launch is important and resources must be available for future launches.
  9. Top management leadership: Management has a role to play in the product development process. They must provide guidance, strategy, resources, and leadership.
  10. Speed to market: Product development speed is the weapon of choice, but sound management practices should be maintained.
  11. A new product process: This is a screening (stage gate) process for new products.
  12. An attractive market: An attractive market makes it easier to have a successful product.
  13. Strength of company abilities: The new product provides a synergy between the company and internal abilities.

There are many product development processes to choose from. Rosenau suggests that the former “relay race” process (one function passing the product from marketing to engineering to manufacturing and back through the loop) is obsolete. Multi-functional team activities involving all departments are necessary for effectiveness and speed to market. The process comprises two parts: a “fuzzy front end” (idea generation and sorting) and new product development (NPD). The complete NPD process includes five activities:

  1. Concept study: A study is needed to uncover the unknowns about the market, technology, and/ or the manufacturing process.
  2. Feasibility investigations: There is a need to determine the limitations of the concept. Find out if the unknowns are resolvable, or if new research improves the project.
  3. Development of the new product: This is the start of the NPD process. This includes the specifications, needs of the customer, target markets, establishment of multi-functional teams, and determination of key stage gates.
  4. Maintenance: These are the post delivery activities associated with product development.
  5. Continuous learning: Project status reports and evaluations are needed to permit learning.

Stage Gate Process

A stage gate process is used by many companies to screen and pass projects as they progress through development stages. Each stage of a project has requirements that must be fulfilled. The gate is a management review of the particular stage in question. It is at the various gates that management should make the “kill” decision. Too many projects are allowed to live beyond their useful lives and clog the system. This dilutes the efforts of project teams and overloads the company resources. Table below illustrates some sample stages.


Product Development Stages for Various Companies

The above Table presents several examples of new product development processes. The individual organization should customize its process and allow a suitable time period for it to stabilize.

Product Development

In the area of new product management, the following are some commonly accepted new product terms:

  1. New-to-the-world products: These are inventions and discoveries that include products like Polaroid cameras, laser printers, in-line skates, etc.
  2. New category entries: These are company products that are not new to the world, but new to the company. A “me-too” type product.
  3.  Additions to product lines: These products are extensions of the organization’s existing product line. Examples are Diet Coke, Caffeine-free Coke.
  4. Product improvements: Current products made better.
  5. Repositioning: Products that are retargeted for a new use. The original purpose was not broad enough. Arm & Hammer baking soda has been repositioned as a drain deodorant, refrigerator deodorant, etc.
  6. Cost reductions: New products which are designed to replace existing  products, but at a lower cost.

GE Plastics has formalized its product design development process. It is described as designing for six sigma using the product development process. The methodology is used to produce engineered plastics through a series of tollgates that describe the elements needed for completion of a stage. The best practices used in each stage include:

  • Understanding critical to quality characteristics for external customers and internal customers
  • Conducting failure mode and effects analysis (FMEA)
  • Performing design of experiments to identify key variables
  • Benchmarking other facilities using competitive analysis, surveys, etc.

Treffs, Simon and Shree provide additional insight on the  development  of other six sigma design methods. A standardized approach has not yet been established, but most authors recommend a framework that tries to remove “gut feel” and substitutes more control.


Treffs  presents a four step IDOV model:

  • Identify: Use a team charter, VOC, QFD, FMEA, and benchmarking.
  • Design: Emphasize CTQs, identify functional requirements, develop alternatives, evaluate, and select.
  • Optimize: Use process capability information, statistical tolerancing, robust design, and various six sigma tools.
  • Validate: Test and validate the design.


Simon  provides a five step define, measure, analyze, design and validate (DMADV) process for six sigma design. The DMADV method for the creation of a new product consists of the following steps:

  •  Define: Define the project goals and customer needs
  • Measure: Measure and determine customer needs and specifications
  •  Analyze: Analyze the process options to meet customer needs
  • Design: Develop the process details to meet customer needs
  • Verify: Verify and validate the design performance


The six sigma DMADOV process is used to develop new processes or products at high quality levels, or if a current process requires more than just incremental improvement. DMADOV is an acronym for define, measure, analyze, design, optimize, and verify. The process steps for a DMADOV project include:

  1. Define the project:
    • What are the project’s goals?
    • Who is the customer and what are their requirements?
  2. Measure the opportunity:
    • Determine customer needs and specifications
    • Benchmark competitors and industry
  3. Analyze the process options:
    • What option will meet the customer needs?
    •  Determine creative solutions
  4. Design the process:
    • Develop a detailed process
    • Design experiments that verify the design meets customer needs
  5. Optimize the process:
    • Test the new process to evaluate performance levels and impacts
    • Re-design the process, as necessary, to meet customer specifications
  6. Verify the performance:
    • Verify the design performance and ability to meet customer needs
    • Deploy the new process

The French Design Model
The model is named after the British author Michael Joseph French.


The French Design Model

The designer (and design team) will capture the needs, provide analysis, and produce a statement of the problem. The conceptual design will generate a variety of solutions to the problem. This brings together the elements of engineering, science, practical knowledge, production methods, and practices. The embodiment of schemes step produces a concrete working drawing (or item) from the abstract concept. The detailing step consolidates and coordinates the fine points of producing a product. The designer of a new product is responsible for taking the initial concept to final launch. In this effort, the designer will be part of a team. The project manager, product manager, or general manager for a new product or new design team (which includes marketing, sales, operations, design, and finance) will need to manage the process.

Design for X (DFX)

Design for X (DFX) is defined as a knowledge-based approach for designing products to have as many desirable characteristics as possible. The desirable characteristics include: quality, reliability, serviceability, safety, user friendliness, etc. This approach goes beyond the traditional quality aspects of function, features, and appearance of the item. AT&T Bell Laboratories coined the term DFX to describe the process of designing a product to meet these characteristics. In doing so, the life cycle cost of a product is addressed and downstream manufacturing costs are lowered. The DFX toolbox has continued to grow from its inception to include hundreds of tools today. The user can be overwhelmed by the choices available. Some researchers in DFX technology have developed sophisticated models and algorithms. The usual practice is to apply one DFX tool at a time; multiple applications of DFX tools can be costly. A systematic framework is not yet available for the DFX methodology. A set methodology would aid in the following ways:

  • Understanding how DFX works
  • Aiding in the selection of a tool
  • Faster learning of DFX tools
  • Providing a platform for multiple DFX tools

Usage of DFX Techniques and Tools

  1. Design guidelines:
    DFX methods are usually presented as rules of thumb (design guidelines). These rules of thumb provide broad design rules and strategies. The design rule to increase assembly efficiency requires a reduction in the part count and part types. The strategy would be to verify that each part is needed.
  2. DFX analysis tools:
    Each DFX tool involves some analytical procedure that measures the effectiveness of the selected tool. For example a DFA (design for assembly) procedure addresses the handling time, insertion time, total assembly time, number of parts, and the assembly efficiency. Each tool should have some method of verifying its effectiveness.
  3. Determine DFX tool structure:
    A technique may require other calculations before the technique can be considered complete. An independent tool does not depend on the output of another tool. The handling analysis, insertion analysis, and number of parts can each be calculated independently, but the total assembly time requires sub-system times for each component.
  4. Tool effectiveness and context:
    Each tool can be evaluated for usefulness by the user. The tool may be evaluated based on accuracy of analysis, reliability characteristics and/or integrity of the information generated.
  5. The focus of activity and the product development process:
    If the product development process is understood by the design team, the use of the DFX tools will be of benefit. Understanding the process activities will help determine when a particular tool can be used.
  6. Mapping tool focus by level:
    The mapping of a tool by level implies that DFX analysis can be complex. Several levels of analysis may be involved with one individual tool. The structure may dictate the feasibility of tool use. For routine product redesigns, the amount of information needed may already be available. For original designs, the amount of interdependence of tools can make it difficult to coordinate all of the changes downstream.

DFX Characteristics

The following characteristics and attributes should be considered by DFX projects.

  1. Function and performance:  These factors are vital for the product.
  2. Safety: Design for safety requires the elimination of potential failure prone elements that could occur in the operation and use of the product. The design should make the product safe for: manufacture, sale, use by the consumer, and disposal.
  3. Quality: The three characteristics of quality, reliability, and durability are required and are often grouped together in this category.
  4. Reliability: A reliable design has already anticipated all that can go wrong with the product, using the laws of probability to predict product failure. Techniques are employed to reduce failure rates in design testing. FMEA techniques consider how alternative designs can fail. Derating of parts is considered. Redundancy through parallel critical component systems may be used.
  5. Testability: The performance attributes must be easily measured.
  6. Manufacturability: The concept of design for manufacturability (DFM) includes the ability to test and ship a product. Producibility and manufacturability are terms used since the 1960s. Design for manufacturability (DFM) has been the dominant term used since 1985. A design must simplify the manufacture of a product through a reduced number of parts and a reduced number of manufacturing operations.
  7. Assembly (Design for Assembly, DFA): DFA means simplifying the product so that fewer parts are involved, making the product easier to assemble. This portion of DFX can often provide the most significant benefit. A product designed for ease of assembly can: reduce service, improve recycling, reduce repair times, and ensure faster time to market. This is accomplished by using fewer parts, reducing engineering documents, lowering inventory levels, reducing inspections, minimizing setups, minimizing material handling, etc.
  8. Environment: The objective is minimal pollution during manufacture, use, and disposal. This could be defined as Design for the Environment (DFE). The concept is to increase growth without increasing the amount of consumable resources. Some categories of environmental design practices include: recovery and reuse, disassembly, waste minimization, energy conservation, material conservation, chronic risk reduction, and accident prevention.
  9. Serviceability (Maintainability and Reparability): A product should be returned to operation and use easily after a failure. This is sometimes directly linked to maintainability.
  10. Maintainability: The product must perform satisfactorily throughout its intended life with minimal expenses. The best approach is to assure the reliability of components. There should be: reduced down time for maintenance activities; reduced user and technician time for maintenance tasks; reduced  requirements for parts; and lower costs of maintenance. Endres provides some specific methods for increasing maintainability (decreasing diagnosis and repair times): use modular construction in systems, use throw away parts (instead of parts requiring repair), use built-in testing, have parts operate in a constant failure rate mode, etc.
  11. User Friendliness or Ergonomics: Human factors engineering must fit the product to the human user. Some guidelines to consider are: fitting the product to the user’s attributes, simplifying the user’s tasks, making controls and functions obvious, anticipating human error, providing constraints to prevent incorrect use, properly positioning locating surfaces, improving component accessibility, and identifying components.
  12. Appearance (Aesthetics): Attractiveness is especially necessary for consumer products. These characteristics include: special requirements of the user, relevancy of the style, compatibility of materials and form, proportional shapes, or protection from damage in service.
  13. Packaging: The best package for the product must be considered. The size and physical  characteristics of the product are important, as are the economics of the package use. The method of packaging must be determined. Automated packaging methods are desirable.
  14. Features: Features are the accessories, options, and attachments available for a product.
  15. Time to Market: The ability to have shorter cycle times in the launch design of a product is desirable. The ability to produce the product either on time or faster than the competition is a tremendous advantage.

Robust Design and Process

Dr. Genichi Taguchi wrote that the United States has coined the term “Taguchi Methods” to describe his system of robustness for the evaluation and improvement of the product development processes. He has stated that he preferred the term “quality engineering” to describe the process. Other authors have used robust design or robust engineering  to describe the process. Any of the above mentioned terms can be used.

Robust Design Approach


Robust design processes are one of the more important developments in design processes in recent years. Robust approaches to design can produce extremely reliable designs both during manufacture and in use. Robust design uses the concept of parameter control to place the design in a position where random “noise” does not cause failure. The concept is that a product or process is controlled by a number of factors to produce the desired response. The signal factor is the signal used for the intended response. That is, the actions taken (signal) to start the lawn mower (response), or the dial setting (signal) to obtain a furnace temperature (response). The success of obtaining the response depends on control factors and noise factors.

A Robust Design Schematic

Control factors are those parameters that are controllable by the designer. These  factors are the items in the product or process that operate to produce a response when triggered by a signal. For instance, in the case of the furnace, the control factors might be the design of the thermocouple and heat controller. Control factors are sometimes separated into those which add no cost to the product or process and those that do add cost. Since factors that add cost are frequently associated with selection of the tolerance of the components, these are called tolerance factors. Factors that don’t add cost are simply control factors. Noise factors are parameters or events that are not controllable by the designer. These are generally random, in that only the mean and variance can be predicted.
Examples of noise factors in furnace design include:

  • Line voltage variations
  • Outside temperature
  • Parallax errors in dial setting

These noise factors have the ability to produce an error in the desired response. The function of the designer is to select control factors so that the impact of noise factors on the response is minimized while maximizing the response to signal factors. This adjustment of factors is best done using statistical design of experiments or SDE.
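The furnace idea above can be sketched numerically. This is an illustrative toy model, not from the text: the output temperature depends on a controllable gain (a control factor), the dial setting (the signal), and random disturbances (noise factors). Choosing a lower gain and compensating with the dial hits the same target temperature while transmitting less noise to the response.

```python
import random

# Toy "furnace" response model (hypothetical): the gain amplifies both
# the intended signal and the uncontrollable noise.
def furnace_temp(dial_signal, gain, noise):
    return gain * dial_signal + gain * noise

random.seed(1)
noises = [random.gauss(0, 2.0) for _ in range(1000)]  # uncontrollable

spread = {}
for gain in (1.0, 0.5):
    dial = 350 / gain  # re-adjust the signal so the target stays 350
    temps = [furnace_temp(dial, gain, n) for n in noises]
    mean = sum(temps) / len(temps)
    spread[gain] = (sum((t - mean) ** 2 for t in temps) / len(temps)) ** 0.5
    print(f"gain={gain}: mean={mean:.1f} sd={spread[gain]:.2f}")
```

Both settings hit the 350-degree target on average, but the lower-gain design shows half the standard deviation: the control factor has been set so the noise factors matter less, which is the essence of robust design.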

 Some of the key principles are concept design, parameter design, and tolerance design.

  1. Concept Design

    Concept design is the selection of the process or product architecture based on technology, cost, customer, or other important considerations. This step depends  heavily on the abilities and creativity of the designer.

  2. Parameter Design

    During the parameter design stage the design is established using the lowest cost components and manufacturing techniques. The response is then optimized for control and minimized for noise. If the design meets the requirements, the designer has achieved an acceptable design at the lowest cost.

  3. Tolerance Design

    If the design doesn’t meet requirements, the designer begins considerations of more expensive components or processes that reduce the tolerances. The tolerances are reduced until the design requirements are met. With robust design approaches, the designer has the ability to produce a design with either the lowest cost, the highest reliability or an optimized combination of cost and reliability.

Example of Robust Design:

A mid-size tile manufacturing company in Japan in 1953 was having a serious problem with their $2 million kiln purchased from West Germany. The problem was extreme variation in the dimensions of the tile produced. The stacked tiles were baked inside a tunnel kiln as shown below. Tiles toward the outside of the stack tended to have a different average dimension and exhibited more variation than those toward the inside of the stack.

A Schematic of a Tile Tunnel Kiln

The cause of variation was readily understandable. There was an uneven temperature profile inside the kiln. To correct the cause, the company would have to redesign the kiln, which was a very expensive proposition. This company’s budget didn’t allow such costly action, but the kiln was creating a tremendous financial loss for the company, so something had to be done. Although temperature was an important factor, it was treated as a noise factor. This meant that temperature was a necessary evil and all other factors would be varied to see if the dimensional variation could be made insensitive to temperature. In Dr. Taguchi’s words, “whether the robustness of the tile design could be improved.” People (the engineers, chemists, etc.) having knowledge about the process were brought together. They brainstormed and identified seven major controllable factors which they thought could affect the tile dimension. These were: (1) limestone content in the raw mix, (2) fineness of the additives, (3) amalgamate content, (4) type of amalgamate, (5) raw material quantity, (6) waste return content, and (7) type of feldspar.

After testing these factors over specified levels using an orthogonal design, the experimenters discovered that factor #1 (limestone content) was the most significant factor, although other factors had smaller effects. It was found that by increasing the limestone content from 1% to 2% (and by choosing a slightly better level for other factors), the percent warpage could be reduced from 30% to less than 1%. Fortunately, limestone was the cheapest material in the tile mix. Moreover, they found through the experimentation that they could use a smaller amount of amalgamate without adversely affecting the tile dimension. Amalgamate was the most expensive material in the tile. This is a classic example of improving quality (reducing the impact of a noise factor), reducing cost (using less amalgamate) and drastically reducing the number of defectives at the same time.

Functional Requirements

In the development of a new product, the product planning department must determine the functions required. The designer (or design engineer) will have a set of requirements that a new product must possess. The designer will develop various concepts, embodiments, or systems that will satisfy the customer’s requirements. All possible alternative systems should be considered. The alternative systems include existing ones and new, not-yet-developed systems. The criteria for selection of a design will be based on the quality level and development costs that will enable the product to survive in the highly competitive marketplace. The product design must be “functionally robust,” which implies that it must withstand variation in input conditions and still achieve desired performance capabilities. The designer has two objectives:

  1. Develop a product that can perform the desired functions and be robust under various operating or exposure conditions
  2. Have the product manufactured at the lowest possible cost

After selection of the new system, the nominal values and tolerance parameters of the new system must be determined. The optimal solution to the new system is called the “optimal condition” or “optimal design.”

Parameter Design

Parameter design improves the functional robustness of the process so that the desired dimensions or quality characteristics are obtained. The process is considered functionally robust if it produces the desired part under a wide variety of conditions.
The steps to obtain this robustness are:

  1. Determine the signal factors (input signals) and the uncontrollable noise factors (error factors) and ranges.
  2. Choose as many controllable factors as possible, select levels for these factors, and assign these levels to appropriate orthogonal arrays. Controllable factors can be adjusted to different levels to improve the functional robustness of the process.
  3. Calculate S/N ratios from the experimental data:

    S/N = 10 log [ (Sβ − Ve) / (r VN) ]

    where:
    r is a measurement of the magnitude of the input signals
    Sβ is the sum of squares of the ideal function (useful part)
    Ve is the mean square of nonlinearity
    VN is an error term of nonlinearity and linearity
  4. Determine the optimal conditions for the process. The optimal conditions are derived from the experimental data. The maximum average S/N of each level of controllable factors will be used for the optimal settings. Additional experiments will be conducted for verification of the settings.
  5. Conduct actual production runs.
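The S/N computation in step 3 can be sketched as follows. This is a minimal sketch assuming the ideal function y = βM (response proportional to signal) and a single pooled error estimate, so that VN reduces to Ve; the signal and response data are hypothetical.

```python
import math

# Dynamic S/N ratio sketch: S/N = 10 log[(S_beta - V_e) / (r * V_N)],
# with symbols as defined in the text. Assumes ideal function y = beta*M
# and V_N = V_e (single pooled error estimate).
def dynamic_sn(signals, responses):
    n = len(responses)
    r = sum(M * M for M in signals)                  # magnitude of inputs
    s_beta = sum(M * y for M, y in zip(signals, responses)) ** 2 / r
    s_total = sum(y * y for y in responses)
    s_error = s_total - s_beta                       # nonlinearity + noise
    v_e = s_error / (n - 1)                          # mean square error
    v_n = v_e                                        # pooled error (assumed)
    return 10 * math.log10((s_beta - v_e) / (r * v_n))

signals = [1.0, 2.0, 3.0]
responses = [2.1, 3.9, 6.2]      # roughly y = 2*M with small error
print(f"S/N = {dynamic_sn(signals, responses):.1f} dB")
```

A response that tracks the signal closely (little nonlinearity or noise) gives a large S/N; more scatter about the ideal line drives the ratio down.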

 Signal-to-Noise Ratio

A signal-to-noise ratio (S/N) is used to evaluate system performance. In assessing the result of experiments, the S/N ratio is calculated at each design point. The combinations of the design variables that maximize the S/N ratio are selected for consideration as product or process parameter settings.


There are three cases of S/N ratios:

Case 1: S/N ratio for “smaller is better,” used for minimizing the wear, shrinkage, deterioration, etc. of a product or process:

S/N = -10 log (mean-squared response) = -10 log [ (1/n) Σ yi² ]

Some references use “r” instead of “n” in the equations for Case 1 and Case 2.

Case 2: S/N ratio for “larger is better”:

S/N = -10 log (mean square of the reciprocal response) = -10 log [ (1/n) Σ (1/yi²) ]

In this case, S/N ratios will seek the highest values for items like strength, life, fuel efficiency, etc.

Case 3: S/N ratio for “nominal is best”:

S/N = 10 log ( ȳ² / s² )

This S/N ratio is applicable for dimensions, clearances, weights, viscosities, etc.
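The three static cases can be written as short functions. This is a sketch using the standard Taguchi forms (n observations yi; for nominal-is-best, the sample mean and variance); the wear data are hypothetical.

```python
import math

# The three static S/N ratio cases. Some references write r for n.
def sn_smaller_is_better(ys):
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_is_better(ys):
    return -10 * math.log10(sum(1 / (y * y) for y in ys) / len(ys))

def sn_nominal_is_best(ys):
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(mean * mean / var)

wear = [0.2, 0.3, 0.25]  # hypothetical wear data: smaller is better
print(f"{sn_smaller_is_better(wear):.1f} dB")  # about 11.9 dB
```

In every case a larger S/N value is better, so the same "maximize S/N" decision rule applies regardless of which response type is being studied.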

Parameter Design Case Study


A case study illustrates the parameter design approach. An experiment was conducted to find an assembly method to join an elastomer connector to a nylon tube for use in automotive engine components. The objective was to minimize the assembly effort. There are 4 controllable factors and 3 noise factors. The controllable factors are at 3 levels; the noise factors at 2 levels. This is illustrated in the Table below.

Parameter Design Case Study Factors

Given 4 factors at 3 levels, a full factorial design would require 3^4 = 81 experiments. Taguchi provided orthogonal arrays to reduce the amount of testing required. They are fractional factorial experiments that, in most cases, disregard interactions. An L9 array can be used for the controllable factors with 9 experimental runs. The 3 noise factors are placed in an L8 array. There are 8 runs of noise conditions. This array induces noise into the experiment to help identify the controllable factors that are least sensitive to a change in noise level.


The two arrays are combined to form the complete parameter design layout. The L9 array is called the inner array, while the L8 array is the outer array.

Example Orthogonal Design Layout

The completed matrix contains the mean response results. In addition, the variation of the signal-to-noise (S/N) ratio has been determined. The larger the S/N ratio, the better. S/N ratios are computed for each of the 9 experimental conditions. An ANOVA can also be used in the calculations to supplement the S/N ratios. Taguchi prefers to use graphing techniques to visually identify the significant factors, without using ANOVA. The optimum combination of factors and levels can be determined from the analysis. A confirmation run should be conducted to verify the results.
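The "pick the winning level" step of this analysis can be sketched numerically: average the S/N ratio of the nine inner-array runs at each level of one controllable factor, then choose the level with the highest average. The L9 column assignment and S/N values below are hypothetical, not the case study's actual data.

```python
# Level (1, 2, or 3) of a hypothetical factor A in each of the 9 runs
# of an L9 inner array, and the S/N ratio computed for each run.
factor_a_levels = [1, 1, 1, 2, 2, 2, 3, 3, 3]
sn_ratios = [24.0, 25.1, 23.7, 27.9, 28.4, 27.2, 22.5, 23.0, 21.8]

# Average S/N at each level of factor A (the "main effect" of A).
by_level = {}
for level, sn in zip(factor_a_levels, sn_ratios):
    by_level.setdefault(level, []).append(sn)
avg_by_level = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}

best = max(avg_by_level, key=avg_by_level.get)
print(avg_by_level)                          # mean S/N at each level
print(f"choose level {best} for factor A")   # level 2 in this sketch
```

Repeating this for each controllable factor, and plotting the level averages, is the graphical analysis Taguchi prefers; the combination of best levels is then checked with a confirmation run.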

The Loss Function


The loss function is used to determine the financial loss that will occur when a quality characteristic, y, deviates from the target value, m. The quality loss is zero when the quality characteristic, y, is at the target value, m. The quality loss function is defined as the mean square deviation of the objective characteristics from their target values. The function is depicted as:

L(y) = k(y - m)²

The function L(y) shows that the further the quality characteristic is from the target, the greater the quality loss. Of course, at a value outside the tolerance specifications, the product is defective. The “A” value is the cost due to a defective product. The amount of deviation from the target, or “tolerance” as Taguchi calls it, is the delta (Δ) value. The constant k is derived as:

k = A / Δ²

The mean square deviation from the target (σ²), as used by Taguchi, does not indicate a variance.

Example of  the Loss Function

Suppose Mr. X wished to buy a pair of size 7 shoes. The store was out of size 7, and he had to settle for a pair of size 7.5 shoes. After two days, he found them to be ill-fitting and had to discard them. The original cost of the shoes was $50. Size 6.5 shoes were also not suitable. The quality loss function can be applied to this situation.


The target value m is 7.0
The existing quality characteristic y is 7.5
The cost of a defective product A is $50
The hypothetical tolerance Δ (7.5 − 7.0) is 0.5

Solving for the quality loss function:

k = A / Δ² = 50 / (0.5)² = 200
L(y) = k(y - m)² = 200 (7.5 - 7.0)² = $50

The above calculation shows the quality loss to be $50. If the shoe size were 7.25, keeping the other variables the same, the resulting loss to society would be:

L(y) = 200 (7.25 - 7.0)² = $12.50

This quality loss calculation indicates a loss to society of $12.50. The use of the loss function illustrates that there is value in reducing variation in the product.
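The shoe calculation can be sketched as a short function, using L(y) = k(y - m)² with k = A/Δ² as defined above:

```python
# Taguchi quality loss function: L(y) = k*(y - m)^2, k = A / Delta^2.
def quality_loss(y, m, A, delta):
    k = A / delta ** 2
    return k * (y - m) ** 2

# Shoe example from the text: target size m = 7.0, tolerance 0.5,
# cost of a defective product A = $50.
print(quality_loss(7.5, 7.0, 50.0, 0.5))   # 50.0 (loss at the tolerance)
print(quality_loss(7.25, 7.0, 50.0, 0.5))  # 12.5
```

Note the quadratic behavior: halving the deviation from target cuts the loss to one quarter, which is why reducing variation pays even for product inside the specification limits.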

Tolerance Design


The tolerances for all system components must be determined. This includes the  types of materials used. In tolerance design, there is a balance between a given quality level and cost of the design. The measurement criteria is quality losses. Quality losses are estimated by the functional deviation of the products from their target values plus the cost due to the malfunction of these products. Taguchi  described the approach as using economical safety factors. For a manufacturer, without design responsibility, tolerances will be supplied by its customers. Design responsible indicates that the organization has the authority to change and produce design drawings. Tolerances are usually established by using engineering experience, considering the uncertainty of design and production factors. A safety factor of 4 is typically used in the United States. This safety factor is bound to vary across industry. The defense and communications sectors may require much larger values. The shipping specifications for a product characteristic is said to be on a higher-level in relation to the subsystem and parts. The subsystem characteristic values are also on a higher level in relation to its parts and materials. The functional limit Δ0 must be determined by methods like experimentation and testing. Taguchi uses a LD50 point as a guide to establish the upper and lower functional limits. The LD50 point is where the product will fail 50% of the time. The 50% point is called the median.  An example from Taguchi illustrates the determination of the functional limit:
A spark plug has a nominal ignition voltage of 20 kV. The lower functional limit Δ01 is -12 kV. The upper functional limit Δ02 is +18 kV. These values are determined by testing. The resulting specifications will have a lower tolerance (Δ01) of 8 kV and an upper tolerance (Δ02) of 38 kV. The relationships between the tolerance specification, the functional limit, and the safety factor are as follows:


The economical safety factor φ is determined as follows:

φ = √(A0/A)

where A0 is the loss incurred when the functional limit is exceeded and A is the cost of adjusting the product in-house. Given the value of the quality characteristic at y, and the target value at m, the quality loss function will appear as follows:

L(y) = (A0/Δ0²)(y – m)²

For example, a power supply for a TV set has functional limits at ±25% of output voltage. The average quality loss A0 after shipment of a bad TV set is known to be $300. The cost of adjusting a power supply in-house before shipping is $1.00. The economical safety factor φ is calculated as:

φ = √(A0/A) = √(300/1) ≈ 17.3

The tolerance specification for the output voltage, as a percentage, will be:

Δ = Δ0/φ = 25%/17.3 ≈ 1.45%

Therefore, the tolerance specification for the output voltage of 120 volts will be:

120 ± (120)(0.0145) = 120 ± 1.74 volts

Although the functional limits were initially established at 120 ± 30 volts (25%), the TV sets should have output voltages within 1.74 volts of nominal.
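The safety-factor calculation above can be reproduced as a short script, using the values from the TV power supply example. Note that the text rounds the tolerance fraction to 0.0145 (giving ±1.74 V); the unrounded result is closer to ±1.73 V.

```python
import math

# Economical safety factor and tolerance specification for the TV power
# supply example: functional limit delta0 = 25% of output voltage,
# loss after shipment A0 = $300, in-house adjustment cost A = $1.

A0 = 300.0      # $ loss to society when a bad set reaches the customer
A = 1.0         # $ cost to adjust the power supply before shipping
delta0 = 0.25   # functional limit as a fraction of nominal output

phi = math.sqrt(A0 / A)   # economical safety factor, ~17.3
delta = delta0 / phi      # tolerance spec as a fraction, ~0.0144

nominal = 120.0           # volts
print(f"tolerance: +/- {nominal * delta:.2f} V")  # ~ +/- 1.73 V
```

The economics drive the tightening: because a field failure costs 300 times what an in-house adjustment costs, the shipping tolerance ends up roughly 17 times tighter than the functional limit.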

Taguchi’s Quality Imperatives

  • Robustness is a function of product design. The manufacturing process and on-line quality control cannot do much to change that. Quality losses are a loss to society.
  • Robust products have a strong signal with low internal noise. The design change of increasing the signal-to-noise ratio will improve the robustness of the product.
  • For new products, use planned experiments varying in values, stresses, and conditions to seek out the parameter targets.  Orthogonal arrays are recommended.
  • To build robust products, simulate customer-use conditions.
  • Tolerances are set before going to manufacturing. The quality loss function can be measured.
  • Products that barely meet the standard are only slightly better than products that fail the specifications. The aim is for the target value.
  • The factory must manufacture products that are consistent. Reduced variation is needed for consistency.
  • Reducing product failure in the field will reduce the number of defectives in the factory. Part variation reduction decreases system variation.
  • Proposals for capital equipment for on-line quality efforts should have the average quality loss (quality loss function) added to the proposal.

The use of engineering techniques based on robust design will improve customer satisfaction, reduce costs, and shorten development time. Reducing rework in the development process gets the product to market more quickly and smoothly.

Statistical Tolerancing

Statistical tolerancing uses the square root of the sum of variances to determine the tolerances required when two or more components are assembled. This results in tighter tolerances for the assembly than would be indicated by summing the individual tolerances.
Example: The assignment of tolerances involves many factors, including the sigma safety level required. Let’s assume that plus and minus four sigma is necessary and that three components are assembled. One might incorrectly assume that the dimensions of the final assembly would be 30″ ± 0.014″. The nominal thickness is correct, but the variation is incorrect. There are two important forces at work here: random assembly and a normal distribution of variation in each of the parts. The proper tolerance is determined by the additive law of variances (variance equals σ²).
The final assembly, without special effort, will be: 30″ ±  0.0082″
Compared with ±0.014″, the ±0.0082″ result is a 41% improvement. Consider the implications of this difference for the final product and the potential for unnecessary internal scrap.
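The root-sum-of-squares (RSS) stack-up can be sketched as follows. The per-part tolerances are an assumption here (three equal parts sharing the ±0.014″ worst-case stack), since the text does not list the individual part tolerances.

```python
import math

# Statistical (RSS) tolerance stacking versus worst-case addition.
# Assumed illustration: three parts, each carrying an equal share of the
# +/- 0.014 in. worst-case stack quoted in the text.

part_tol = 0.014 / 3                 # assumed +/- per-part tolerance
worst_case = 3 * part_tol            # simple sum: +/- 0.014 in.
rss = math.sqrt(3 * part_tol ** 2)   # additive law of variances: ~ +/- 0.0081 in.

print(f"worst case: +/- {worst_case:.4f} in.")
print(f"RSS stack:  +/- {rss:.4f} in.")
# With equal tolerances the improvement is 1 - 1/sqrt(3), about 42%;
# the text's 41% figure reflects rounded intermediate values.
```

With n equal parts the RSS stack shrinks by a factor of √n relative to the worst-case sum, because it is very unlikely that all parts sit at the same extreme in a random assembly.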

Porter’s Five Competitive Forces

Professor Michael Porter of the Harvard Business School developed the five competitive forces as a strategy to analyze the marketplace and to gain a market advantage. He states that a company’s current position is the heart of strategy. The five forces affect most industries. An analyst may have to perform considerable research in order to determine the positioning of any individual company. The five competitive forces are:

  1. The threat of new entrants
  2. The power of suppliers
  3. The power of customers
  4. Substitute products or services
  5. Industry rivalry
  1. The Threat of New Entrants

    The ability of a new competitor to enter into an industrial sector is a major market force that existing companies have to consider. If the barriers are not too difficult, new competitors will bring additional capacity, new or greater resources, and the desire to gain market share. There are six possible barriers to consider:

    1. Economies of scale: The new entrant must be prepared to compete on a large scale. Achieving economies of scale requires very good operational techniques.
    2. Product differentiation: If tremendous brand loyalty is a barrier, this may cause new entrants to invest very heavily in methods to counter brand loyalty.
    3. Capital requirements: Large initial investments may be required in facilities, inventory, marketing, or R&D in order to compete.
    4. Learning curve advantage: A cost advantage may occur from being further down the learning curve. This advantage is due to elements like accumulated production experience or patents.
    5. Access to distribution channels: Market distribution of the product must be secured in some fashion. The existing distribution channels may be closed or open to new entrants.
    6. Government policy: Regulated industries enjoy some protection from new competitors. Examples include some airlines, coal mining companies, and liquor retailers.
  2. The Power of Suppliers

    Suppliers and customers (buyers) can be considered to be on opposing economic sides. Industrial profits can be affected by the two vying forces if there is an imbalance between them. Some of the factors that make a supplier a powerful force, and potentially difficult to bargain with, include:

    • The industry is dominated by a few companies
    • The supplier has a product or raw material that is unique
    • The product does not have substitutes
    • The supplier has the potential to perform or integrate the service
    • The industry is not important to the supplier
  3. The Power of Customers

    Customers (buyers) are powerful if:

    • Economies of scale matter, and purchases are large
    • The buyer can integrate backwards if needed, keeping costs down
    • The purchased product is a small part of the buyer’s total cost
    • The buyer is in a low profit industry, and must pursue low cost items
    • The product is deemed a commodity
  4. Substitute Products

    A product or industry that has a substitute product will find itself with a cap on potential profits. This can be seen in steel versus aluminum products, corn syrup versus sugar, or fiberglass versus Styrofoam products. Substitute products may be new technologies that have the potential to cause price reductions in the industry.

  5. Industry Rivalry 

    The jockeying among current contestants can be an important factor especially when the rivalry among industry foes is intense. There can be significant price competition, frequent product introductions, and industrial advertising wars. Industry rivalry will have the following characteristics:

    • There are numerous competitors with equal shares
    • There is slow industry growth
    • The product is not easily differentiated (a commodity)
    • There is excessive industry capacity
    • The exit barriers are high (the costs of leaving the industry are very high)
    • There is intense rivalry
Use of the Five Competitive Forces

An analysis of the five competitive forces may require considerable effort. Professor Porter presents an organized framework to perform the analysis. Once the forces are identified, an analyst can determine the strengths and weaknesses of a company as it pursues a particular strategy. The company can try to match its strengths and weaknesses to the current industry model. That is, if the company is not the low-cost producer, it will not try to have a price war with the industry’s low-cost producer, unless it has long staying power. The company might also try to position itself in a quadrant where the forces are weakest, and where higher profit opportunities might exist. Porter maintains that an effective competitive strategy will allow a company to be proactive in creating a defensible position against competition. The company can position itself in a certain segment supported by its capabilities and resources. It can also try to reduce or influence certain competitive forces in the industry. Finally, the analysis can help the company anticipate shifts in the underlying forces and take advantage of business opportunities.

Portfolio Architecting

Technical processes include technology portfolio architecting, research and technology development (R&TD), product commercialization, and post-launch engineering work. The older approach used DMAIC six sigma and lean methods to correct problems and increase flow in existing technical processes, which provided quick, “emergency” actions. The new approach involves enabling and enhancing technical processes to prevent problems before they become an issue. This uses six sigma on a sustained basis to become consistent and predictable at conducting value-adding tasks.

Inbound R&TD is focused on strategic technology portfolio definition, development, optimization, and transfer. Inbound product design engineering is focused on tactical product commercialization to rapidly prepare a new design, which often incorporates newly transferred technology, to fulfill launch requirements. Outbound post-launch engineering is focused on operations in post-launch production and service engineering support. Service engineering professionals often function as a “reactionary force” to fix problems. Instead, the focus should be on planning engineering changes and upgrades to increase profit margins.

Newly transferred technology is frequently immature, resulting in delays in the delivery of new products. Executives want an orderly design and launch of new product lines. If the product portfolio and the technology needed to enable it are not linked and timed for integration, the work of executing the new portfolio cannot happen on time. There is a need to design a strong, strategic alignment between product and technology portfolio architecting tasks for the sake of downstream cycle-time efficiency and control. The product and technology portfolio renewal process is the first of two strategic processes in which research and development (R&D) professionals can use six sigma methods.
The second process is the formal development of new technologies that the product and technology portfolio process requires. 

Strategic to Tactical Workflow


The strategic component consists of the inbound technical processes (research and technology development), and the tactical component is product design engineering done during commercialization.
Figure below shows the integrated marketing and technical functions that reside within the inbound and outbound technical areas.

Process Linkage Diagram

To enable growth, marketing and technical processes and functions must be linked for six sigma in marketing, R&TD, and design. Integrated, multi-functional teams from inbound marketing, R&TD, design, and production/service support engineering must be used across all three process areas to develop and share data, to manage risk, and to make decisions. The IDEA process for product portfolio definition and development consists of the following phases:

  • Identify markets, their segments, and opportunities using technology benchmarking and road mapping
  • Define portfolio requirements and product architectural alternatives
  • Evaluate product alternatives against competitive portfolios, then select
  • Activate ranked and resourced individual product commercialization projects

With statistically significant data, differentiated needs between the market segments within the general markets may be defined. Diverse new, unique, and difficult (NUD) needs may be translated into a set of product portfolio requirements that possess common and differentiated targets and fulfillment ranges. These requirements drive the product portfolio architecting process. Innovation at this level is the most strategic form of creativity and idea generation that a company can conduct. The define phase is the key transfer point for delivering product portfolio requirements to the R&TD organization. R&TD receives these diverse requirements and translates them into technology requirements. With several alternative product portfolio architectures defined, the team enters the evaluate phase. This phase involves the data-driven evaluation of the candidate portfolio architectures against competitive benchmarks in light of the portfolio requirements. A superior hybrid portfolio architecture emerges from this process phase. The final phase of product and technology portfolio renewal (P&TPR) is to activate product commercialization projects out of the superior portfolio architecture. The focus here is on activating projects that will, in the first phase of commercialization, convert opportunities into specific product requirements and ideas into specific product concepts.

Set-Based Design

Set-based concurrent engineering (SBCE) design begins with broad sets of possible solutions, converging to a narrow set of alternatives and then to a final solution. Design teams from various functions can work on sets of solutions in parallel, gradually narrowing the sets. Information from development, testing, customers, and other sources helps narrow the decision sets. Sets of ideas are reviewed and reworked, leading to more robust, optimized, and efficient projects. This approach is deemed more efficient than working with one idea at a time. An analogy to set-based concurrent design is the game of 20 questions. A player is asked to identify an unknown object or problem, and has only 20 questions in which to do so. The experienced player uses a series of broad questions to narrow the field of possibilities; questions that establish animal, vegetable, or mineral eliminate many possibilities quickly. SBCE seeks to narrow the scope of a design in a similarly efficient and robust manner. Toyota has been cited as the leading example of a company using practices consistent with SBCE. SBCE assumes that reasoning and communicating about sets of ideas is preferable to working with one idea at a time.

Principles of SBCE

  1. Define the feasible regions
  2. Communicate sets of possibilities
  3. Look for intersections
  4. Explore trade-offs by designing multiple alternatives
  5. Impose minimum constraint
  6. Narrow sets smoothly, balancing the need to learn and the need to decide
  7. Pursue high-risk and conservative options in parallel
  8. Establish feasibility before commitment
  9. Stay within sets once committed
  10. Control by managing uncertainty at process gates
  11. Seek solutions robust to physical, market, and design variation
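Principles 2 and 3 above can be illustrated with a toy sketch: each function communicates its set of feasible design alternatives, and the team narrows to the designs every function can accept. The concept names and functional groupings here are hypothetical.

```python
# Toy sketch of SBCE narrowing: each function reports the set of design
# concepts it finds feasible, and the intersection of those sets is the
# pool the team carries forward. All names below are hypothetical.

styling_ok = {"A", "B", "C", "E"}        # concepts feasible for styling
manufacturing_ok = {"B", "C", "D", "E"}  # concepts feasible for manufacturing
cost_ok = {"A", "B", "E"}                # concepts within the cost target

# Look for intersections: only concepts in every feasible region survive.
candidates = styling_ok & manufacturing_ok & cost_ok
print(sorted(candidates))  # ['B', 'E'] survive the first narrowing
```

Each round of testing or customer feedback shrinks one or more of the feasible sets, so the intersection narrows smoothly rather than forcing an early commitment to a single point design.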

Theory of Inventive Problem-Solving (TRIZ)

TRIZ is a Russian acronym for “the theory of inventive problem solving.” Altshuller states that inventiveness can be taught: creativity can be learned; it is not innate, and one does not have to be born with it. Altshuller asserts that traditional inventing is “trial and error,” resulting in much wasted time, effort, and resources. Through his years of education and imprisonment, he solidified a theory that one solves problems through a collection of assembled techniques. Technical evolution and invention follow certain patterns, and one should be knowledgeable about them to solve technical problems. There is some common sense, logic, and use of physics in problem solving.
There are three groups of methods to solve technical problems:

  1. Various tricks (a reference to a technique)
  2. Methods based on utilizing physical effects and phenomena (changing the  state of the physical properties of substances)
  3. Complex methods (combination of tricks and physics)

Altshuller provides an introduction to ARIZ (algorithm to solve an inventive problem). This is a sequence of 9 action steps in the use of TRIZ. The steps are:

  • Analysis of the problem
  • Analysis of the problem’s model: Use of a block diagram defining the “operating zone”
  • Formulation of the ideal final result (IFR): Providing a description of the desired final result, which guides the more detailed steps that follow
  • Utilization of outside substances and field resources
  • Utilization of an informational data bank: Determining the physical or chemical constraints (standards) of the problem
  • Change or reformulate the problem
  • Analysis of the method that removed the physical contradiction: Is a quality solution provided?
  • Utilization of the found solution: Seeking side effects of the solution on the system or other processes
  • Analysis of the steps that lead to the solution: An analysis may prove useful later

Initially, there were 27 TRIZ tools, which were later expanded to 40 innovative, technical tools. The list of the 40 principles is:

  • Segmentation
  • Partial or excessive action
  • Extraction
  • Transition into a new dimension
  • Local quality
  • Mechanical vibration
  • Asymmetry
  • Periodic action
  • Consolidation
  • Continuity of useful action
  • Universality
  • Rushing through
  • Nesting
  • Convert harm into benefit
  • Counterweight
  • Replacement of mechanical systems
  • Prior counteraction
  • Pneumatic or hydraulic construction
  • Prior action
  • Flexible membranes or thin films
  • Cushion in advance
  • Porous material
  • Equipotentiality
  • Changing the color
  • Do it in reverse
  • Homogeneity
  • Feedback
  • Rejecting or regenerating parts
  • Mediator
  • Transformation of properties
  • Self-service
  • Phase transition
  • Copying
  • Thermal expansion
  • Dispose
  • Accelerated oxidation
  • Spheroidality
  • Inert environment
  • Dynamicity
  • Composite materials

Systematic Design

Systematic design is a step-by-step approach to design that provides a structure to the design process, based on a German methodology. Systematic design is described as a very rational approach that will produce valid solutions. The authors who describe this approach detail a method that closely follows the German design standard, Guideline VDI 2221 (“Systematic Approach to the Design of Technical Systems and Products,” from the Design Committee of the VDI: Verein Deutscher Ingenieure).
Pahl presents four main phases in the design process:

  • Task clarification: collect information, formulate concepts, identify needs
  • Conceptual design: identify essential problems and sub-functions
  • Embodiment design: develop concepts, layouts, refinements
  • Detail design: finalize drawings, concepts and generate documentation

An abstract concept is developed into a concrete item, represented by a drawing. Synthesis involves search and discovery, and the act of combining parts or elements to produce a new form. Modern German design thinking uses the following structure:

  • The requirements of the design are determined
  • The appropriate process elements are selected
  • A step-by-step method transforms qualitative items to quantitative items
  • A deliberate combination of elements of differing complexities is used

The main steps in the conceptual phase:

  • Clarify the task
  • Identify essential problems
  • Establish function structures
  • Search for solutions using intuition and brainstorming
  • Combine solution principles and select qualitatively
  • Firm up concept variants: preliminary calculations and layouts
  • Evaluate concept variants

There are suggested tools and methods for various steps along the design process. The creativity of the designer is encouraged in this method, but on a more structured basis. Any and all design methods must employ the designer’s creativity to find new innovative solutions.

Critical Parameter Management

Critical parameter management (CPM) is a:

  • Disciplined methodology for managing, analyzing, and reporting technical  product performance.
  • Process for linking system parameters for sensitivity analysis and optimization of critical performance factors.
  • Strategic tool for improving product development by integrating systems, software, design, and manufacturing activities.

CPM program benefits include:

  1. Facilitated analysis
    • Statistical modeling & optimization of the performance-cost trade-off
    • Real-time system-level sensitivity analysis
    • Connects analyses between system, subsystem and component levels
  2. Improved collaboration
    • Shares technical analysis and knowledge
    • Links ownership to parameters
    • Connects teams and parameters to understand flow-down of requirements
    • Captures and leverages invested intellectual capital for future business
  3. Streamlined reporting
    • Technical performance measure (TPM) design margins are statistically tracked over the product lifecycle
    • Automated, real-time TPM data gathering and report generation
    • Reconciliation of requirement allocation and engineering design capability

The proper place to initiate critical parameter management in a business is during advanced product portfolio planning, and research and technology development (R&TD). At these earliest stages of product development, a certified portfolio of critical functional parameters and responses can be rapidly transferred as a family of modular designs in the product commercialization program. Critical parameter management is a systems-engineering and integration process that is used within an overarching technology development and product commercialization roadmap. The I2DOV road map defines a generic technology development process approach to research and technology development which consists of the following phases:

  • I2 = Invention and Innovation
  • D = Develop technology
  • O = Optimization of the robustness of the baseline technologies
  • V = Verification of the platform or sublevel technologies

Critical parameter management derives from a carefully defined architectural flow-down of requirements that can be directly linked to functions engineered to flow up to fulfill the requirements. Customer needs drive system-level technical requirements, which drive the system-level engineering functions, which in turn drive the definition of the system-level architectural concepts. When a system architecture is evaluated from this perspective, the inevitable trade-offs among subsystem, subassembly, and component architectures begin.


Critical Parameter Management Model

Pugh Analysis

Stuart Pugh, former Professor of Engineering Design at the University of Strathclyde, Glasgow, Scotland (now deceased), was a leader in product development (total design) methodology. He was a practicing engineer in industry before turning to the academic world. His work provides a methodology for product conception and generation. Quality function deployment can be used to determine customer technical requirements, which provides the starting point needed to develop new products. Pugh suggests a cross-functional team activity to assist in the development of improved concepts. The process starts with a set of alternative designs. These early designs come from various individuals in response to the initial project charter. A matrix-based process is used to refine the concepts. During the selection process, additional new concepts are generated. The final concept will generally not be the original concept.
The Pugh concept selection process has 10 steps:

  1. Choose criteria: The criteria come from the technical requirements.
  2. Form the matrix: An example matrix is shown below.
  3. Clarify the concepts: The team members must be sure that all of the concepts are understood. New concepts may require a sketch for visualization.
  4. Choose the datum concept: Select a design that is among the best concepts available for the baseline (datum).
  5. Run the matrix: Every concept is compared to the datum. Use a simple scale to rate the concepts: a “+” for a better concept, a “-” for a worse one, and an “S” for one that is the same.
  6. Evaluate the ratings: Add up the scores for each category. See what the positives contribute to one’s insight into the design.
  7. Attack the negatives and enhance the positives: Actively discuss the most promising concepts. Kill or modify the negative ones.
  8. Select a new datum and rerun the matrix: A new hybrid concept can be entered into the matrix for consideration.
  9. Plan further work: At the end of the first working session, the team may gather more information, perform experiments, seek technical knowledge, etc.
  10. Iterate to arrive at a new winning concept: Return the team to work on the concepts. Rerun the matrix for further analysis as needed.
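Steps 5 and 6 above (running the matrix and evaluating the ratings) can be sketched in a few lines. The concepts, criteria, and ratings below are hypothetical, chosen only to show the mechanics of totaling a Pugh matrix.

```python
# Minimal sketch of running a Pugh matrix: each concept is rated against
# the datum per criterion with "+", "-", or "S", then the plus and minus
# counts are totaled. Concepts, criteria, and ratings are hypothetical.

ratings = {
    #            criteria: cost, weight, ease of assembly, durability
    "Concept 1": ["+", "S", "-", "+"],
    "Concept 2": ["-", "-", "S", "+"],
    "Concept 3": ["+", "+", "+", "-"],
}

for concept, scores in ratings.items():
    plus, minus = scores.count("+"), scores.count("-")
    print(f"{concept}: {plus} better, {minus} worse, net {plus - minus:+d}")
```

The totals are not a mechanical decision rule; they direct the team's discussion toward attacking the negatives of the strongest concepts and carrying their positives into new hybrids.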

Example of a Pugh Evaluation Matrix

The Pugh concept selection method has proven to be successful in the product  development process. The team will acquire:

  • Better insight on the requirements
  • Better understanding of the design problems
  • Greater understanding of the potential solutions
  • Greater understanding of the iteration of concepts
  • More insight on why certain designs are stronger than others
  • The desire to create additional concepts
