Step 1: Planning and Preparation
1.1 Purpose
The purpose of the Design FMEA Planning and Preparation Step is to define which FMEAs will be done for a project, and to define what is included and excluded in each FMEA based on the type of analysis being developed, i.e., system, subsystem, or component. The main objectives of Design FMEA Planning and Preparation are:
- Project identification
- Project plan: InTent, Timing, Team, Tasks, Tools (5T)
- Analysis boundaries: What is included and excluded from the analysis
- Identification of baseline FMEA with lessons learned
- Basis for the Structure Analysis step
1.2 DFMEA Project Identification and Boundaries
DFMEA Project identification includes a clear understanding of what needs to be evaluated. This involves a decision-making process to define the DFMEAs that are needed for a customer program. What to exclude can be just as important as what to include in the analysis. Below are some basic questions that help identify DFMEA projects.
- What is the customer buying from us?
- Are there new requirements?
- Does the customer or company require a DFMEA?
- Do we make the product and have design control?
- Do we buy the product and still have design control?
- Do we buy the product and do not have design control?
- Who is responsible for the interface design?
- Do we need a system, subsystem, component, or other level of analysis?
Answers to these questions and others defined by the company help create the list of DFMEA projects needed. The DFMEA project list assures consistent direction, commitment and focus. The following may assist the team in defining DFMEA boundaries as applicable:
- Legal requirements
- Technical requirements
- Customer wants/needs/expectations (external and internal customers)
- Requirements specification
- Diagrams (Block/Boundary) from similar projects
- Schematics, drawings, and/or 3D models
- Bill of materials (BOM), risk assessment
- Previous FMEA for similar products
- Error-proofing requirements, Design for Manufacturability and Assembly (DFM/DFA)
- Quality Function Deployment (QFD)
The following may be considered in defining the scope of the DFMEA as appropriate:
- Novelty of technology / degree of innovation
- Quality/reliability history (in-house, zero mileage, field failures, warranty and policy claims for similar products)
- Complexity of design
- Safety of people and systems
- Cyber-physical system (including Cyber security)
- Legal compliance
- Catalog & standard parts
1.3 DFMEA Project Plan
A plan for the execution of the DFMEA should be developed once the DFMEA project is known. It is recommended that the 5T method (InTent, Timing, Team, Tasks, Tools) be used. The plan for the DFMEA helps the company be proactive in starting the DFMEA early. The DFMEA activities (5-Step process) should be incorporated into the overall project plan.
1.4 Identification of the Baseline DFMEA
Part of the preparation for conducting the DFMEA is knowing what information is already available that can help the cross-functional team. This includes use of a foundation DFMEA, similar product DFMEA, or product family DFMEA. The family DFMEA is a specialized foundation design FMEA for products that generally contain common or consistent product boundaries and related functions. For a new product in the family, the new project-specific components and functions would be added to the family DFMEA to complete the new product's DFMEA. The additions for the new product may be in the family DFMEA itself, or in a new document with reference to the original family or foundation DFMEA. If no baseline is available, the team will develop a new DFMEA.
1.5 DFMEA Header
During the Planning and Preparation Step, the header of the DFMEA document should be filled out. The header may be modified to meet the needs of the organization. The header includes some of the basic DFMEA scope information as follows:
- Company Name: Name of Company Responsible for DFMEA
- Engineering Location: Geographical Location
- Customer Name: Name of Customer(s) or Product
- Model Year / Program(s): Customer Application or Company Model /Style
- Subject: Name of DFMEA Project (System, Subsystem and/or Component)
- DFMEA Start Date: Start Date
- DFMEA Revision Date: Latest Revision Date
- Cross-Functional Team: Team Roster needed
- DFMEA ID Number: Determined by Company
- Design Responsibility: Name of DFMEA owner
- Confidentiality Level: Business Use, Proprietary, Confidential
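
For teams that manage DFMEAs in software rather than on paper forms, the header above amounts to a small structured record. The following is a minimal sketch of such a record; it is illustrative only, and every field name is an assumption derived from the list above rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DfmeaHeader:
    """Basic DFMEA scope information; fields mirror the list above (names are illustrative)."""
    company_name: str                # company responsible for the DFMEA
    engineering_location: str        # geographical location
    customer_name: str               # name of customer(s) or product
    model_year_program: str          # customer application or company model/style
    subject: str                     # DFMEA project (system, subsystem and/or component)
    start_date: date
    revision_date: date              # latest revision date
    cross_functional_team: list[str] = field(default_factory=list)  # team roster
    dfmea_id: str = ""               # determined by the company
    design_responsibility: str = ""  # name of the DFMEA owner
    confidentiality: str = "Business Use"  # Business Use, Proprietary, or Confidential
```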

1.6 Basis for Structure Analysis
The information gathered during Step 1 Planning and Preparation will be used to develop Step 2 Structure Analysis.
Step 2: Structure Analysis
2.1 Purpose
The purpose of Design Structure Analysis is to identify and break down the FMEA scope into system, subsystem, and component parts for technical risk analysis. The main objectives of a Design Structure Analysis are:
- Visualization of the analysis scope
- Structure tree or equivalent: block diagram, boundary diagram, digital model, physical parts
- Identification of design interfaces, interactions, close clearances
- Collaboration between customer and supplier engineering teams (interface responsibilities)
- Basis for the Function Analysis step
2.2 System Structure
A system structure is comprised of system elements. Depending on the scope of analysis, the system elements of a design structure can consist of a system, subsystems, assemblies, and components. Complex structures may be split into several structures (work packages) or different layers of block diagrams and analyzed separately for organizational reasons or to ensure sufficient clarity. A system has a boundary separating it from other systems and the environment. Its relationship with the environment is defined by inputs and outputs. A system element is a distinct component of a functional item, not a function, a requirement, or a feature.
2.3 Define the Customer
There are two major customers to be considered in the FMEA analysis:
- END USER: The individual who uses a product after it has been fully developed and marketed.
- ASSEMBLY and MANUFACTURING: The locations where manufacturing operations (e.g., powertrain, stamping and fabricating) and vehicle/product assembly and production material processing take place. Addressing the interfaces between the product and its assembly process is critical to an effective FMEA analysis. This may be any subsequent or downstream operation or a next-Tier manufacturing process.
Knowledge of these customers can help to define the functions, requirements and specifications more robustly as well as aid in determining the effects of related failure modes.
2.4 Visualize System Structure
A visualization of the system structure helps the DFMEA team develop the structural analysis. There are various tools which may be used by the team to accomplish this. Two commonly used methods are:
- Block/Boundary Diagrams
- Structure Tree
2.4.1 Block/Boundary Diagram
Block/Boundary Diagrams are useful tools that depict the system under consideration and its interfaces with adjacent systems, the environment and the customer. The diagram is a graphic
representation that provides guidelines for structured brainstorming and facilitates the analysis of system interfaces as a foundation for a Design FMEA. The diagram below shows the physical and logical relationships between the components of the product. It indicates the interaction of components and subsystems within the scope of the design as well as the interfaces to the product's Customer, Manufacturing, Service, Shipping, etc. The diagram identifies persons and things that the design interacts with during its useful life. The Boundary Diagram can be used to identify the Focus Elements to be assessed in the Structure Analysis and Function Analysis. The diagram may be in the form of boxes connected by lines, with each box corresponding to a major component of the product. The lines correspond with how the product components are related to, or interface with, each other, with arrows at the end point(s) to indicate the direction of flow. Interfaces between elements in the Boundary Diagram can be included as Focus Elements in the Structure and Function Analysis Structure Tree. There are different approaches and formats to the construction of a Block/Boundary Diagram, which are determined by the organization. The terms “Block Diagram” and “Boundary Diagram” are used interchangeably; however, the Boundary Diagram tends to be more comprehensive due to the inclusion of external influences and system interactions. In the context of the DFMEA, Block/Boundary Diagrams define the analysis scope and responsibility and provide guidelines for structured brainstorming. The scope of analysis is defined by the boundaries of the system; however, interfaces with external factors/systems are to be addressed.
- Defines scope of analysis (helps to identify potential team members)
- Identifies internal and external interfaces
- Enables application of system, sub-system, and component hierarchy
When correctly constructed, Block/Boundary Diagrams provide detailed information to the P-Diagram and the FMEA. Although Block/Boundary diagrams can be constructed to any level of
detail, it is important to identify the major elements, understand how they interact with each other, and how they may interact with outside systems. Block/Boundary Diagrams are steadily refined as the design matures. The steps involved in completing a Block/Boundary Diagram may be described as follows:
- Describe components and features
- Naming the parts and features helps alignment within the team, particularly when features have “nicknames”
- All system components and interfacing components shown
- Reorganize blocks to show linkages:
- Solid line for direct contact
- Dashed line for indirect interfaces, e.g., clearances or relative motion
- Arrows indicate direction
- All energy flows, signal, or force transfers identified
- Describe connections
- Consider all types of interfaces, both desired and undesired:
- P – Physically touching (welded, bolted, clamped, etc.)
- E – Energy transfer (torque (Nm), heat, etc.)
- I – Information transfer (ECU, sensors, signals, etc.)
- M – Material exchange (cooling fluid, exhaust gases, etc.)
- Add interfacing systems and inputs (persons and things). The following should be included:
- Adjacent systems, including systems that are not physically touching your system but may interact with it, require clearance, involve motion, or involve thermal exposure
- The customer and/or end user
- Arrows indicate direction
- Define the boundary (What parts are within the span of control of the team? What is new or modified?). Only parts designed or controlled by the team are inside the boundary. The blocks within the boundary diagram are one level lower than the level being analyzed. Blocks within the boundary may be marked to indicate items that are not part of the analysis.
- Add relevant details to identify the diagram.
- System, program and team identification
- Key to any colors or line styles used to identify different types of interactions
- Date and revision level
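
One way to keep a block/boundary diagram consistent as it is refined is to record the blocks and their linkages as data, tagging each linkage with its interface type (P/E/I/M, as defined above) and whether the line is solid (direct contact) or dashed (indirect interface). The sketch below is illustrative, not a prescribed format; the block names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class InterfaceType(Enum):
    P = "physically touching"   # welded, bolted, clamped, etc.
    E = "energy transfer"       # torque (Nm), heat, etc.
    I = "information transfer"  # ECU, sensors, signals, etc.
    M = "material exchange"     # cooling fluid, exhaust gases, etc.


@dataclass
class Linkage:
    source: str
    target: str             # arrow direction: source -> target
    kind: InterfaceType
    direct: bool = True     # True = solid line (direct contact); False = dashed (clearance/relative motion)


# Hypothetical blocks and linkages for a small motor subsystem
blocks = {"door ECU", "commutation system", "motor body", "brush card"}
linkages = [
    Linkage("door ECU", "commutation system", InterfaceType.I),    # control signals
    Linkage("brush card", "commutation system", InterfaceType.P),  # physical contact
    Linkage("commutation system", "motor body", InterfaceType.E),  # torque transfer
]

# A simple consistency check: every linkage endpoint must be a known block
for link in linkages:
    assert link.source in blocks and link.target in blocks
```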

2.4.2 Interface Analysis
An interface analysis describes the interactions between elements of a system. There are five primary types of interfaces:
- Physical connection (e.g., brackets, belts, clamps and various types of connectors)
- Material exchange (e.g., compressed air, hydraulic fluids or any other fluid or material exchange)
- Energy transfer (e.g., heat transfer, friction or motion transfer such as chain links or gears)
- Data exchange (e.g., computer inputs or outputs, wiring harnesses, electrical signals or any other types of information exchange, cyber security items)
- Human-Machine (e.g., controls, switches, mirrors, displays, warnings, seating, entry/exit)
Another type of interface may be described as a physical clearance between parts, where there is no physical connection. Clearances may be static and/or dynamic. Consider the interfaces between subsystems and components in addition to the content of the subsystems and components themselves. An interface analysis documents the nature (strong/weak/none/beneficial/harmful) and type (Physical, Energy, Information, or Material Exchange) of the relationships that occur at all internal and external interfaces graphically displayed in the Block/Boundary Diagram. Information from an interface analysis provides valuable input to a Design FMEA, such as the primary functions or interface functions to be analyzed, with potential causes/mechanisms of failure due to effects from neighboring systems and environments. Interface analysis also provides input to the P-Diagram on ideal functions and noise factors.
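
Because an interface analysis is essentially a table keyed by pairs of elements, it can be recorded as a small mapping that captures the type and nature of each interface, so that weak or harmful interfaces can be pulled out as candidate failure causes. A minimal sketch, with hypothetical element names loosely based on the brake examples used later in this document:

```python
from enum import Enum


class Kind(Enum):
    PHYSICAL = "physical connection"
    MATERIAL = "material exchange"
    ENERGY = "energy transfer"
    DATA = "data exchange"
    HUMAN_MACHINE = "human-machine"
    CLEARANCE = "physical clearance"  # no connection; static and/or dynamic


# Each interface records (type, strength, nature); strength is strong/weak/none,
# nature is beneficial/harmful, as described in the text above.
interfaces = {
    ("brake pedal", "driver"): (Kind.HUMAN_MACHINE, "strong", "beneficial"),
    ("caliper", "rotor"): (Kind.ENERGY, "strong", "beneficial"),          # friction/heat
    ("brake line", "exhaust pipe"): (Kind.CLEARANCE, "weak", "harmful"),  # thermal exposure
}

# Harmful interfaces feed the DFMEA as potential causes from neighboring systems
harmful = [pair for pair, (kind, strength, nature) in interfaces.items()
           if nature == "harmful"]
print(harmful)  # [('brake line', 'exhaust pipe')]
```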
2.4.3 Structure Trees
The structure tree arranges system elements hierarchically and illustrates the dependencies via the structural connections. A clearly structured illustration of the complete system is ensured by the fact that each system element exists only once, which prevents redundancy. The structures arranged under each System Element are independent sub-structures. The interactions between System Elements may be described as functions and represented by function nets. There is always a system element present, even if it is only derived from the function and cannot yet be specified more clearly.

Structural Analysis (Step 2)

| 1. Next Higher Level | 2. Focus Element | 3. Next Lower Level or Characteristic Type |
|---|---|---|
| Window Lifter Motor | Commutation System | Brush Card Base Body |
- Next Higher Level: The highest level of integration within the scope of analysis.
- Focus Element: The element in focus. This is the item that is the topic of consideration in the failure chain.
- Next Lower Level or Characteristic Type: The element that is the next level down the structure from the focus element.
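
Since each system element appears exactly once, the structure lends itself to a simple tree representation. A minimal sketch using the window lifter elements from the table above (the node type and methods are illustrative, not part of the methodology):

```python
from dataclasses import dataclass, field


@dataclass
class SystemElement:
    name: str
    children: list["SystemElement"] = field(default_factory=list)

    def add(self, name: str) -> "SystemElement":
        """Attach and return a lower-level system element."""
        child = SystemElement(name)
        self.children.append(child)
        return child


# Window lifter example from the Structural Analysis table
motor = SystemElement("Window Lifter Motor")   # next higher level
commutation = motor.add("Commutation System")  # focus element
commutation.add("Brush Card Base Body")        # next lower level


def walk(element: SystemElement, depth: int = 0) -> None:
    """Print the structure tree, one element per line, indented by level."""
    print("  " * depth + element.name)
    for child in element.children:
        walk(child, depth + 1)


walk(motor)
```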
2.5 Collaboration between Customer and Supplier
The output of the Structure Analysis (visualization of the design and its interfaces) provides a tool for collaboration between customers and suppliers during technical reviews of the design and/or DFMEA project.
2.6 Basis for Function Analysis
The information defined during Step 2 Structure Analysis will be used to develop Step 3 Function Analysis. If design elements (items) are missing from the Structure Analysis they will also be missing from the Function Analysis.
Step 3: Function Analysis
3.1 Purpose
The purpose of the Design Function Analysis is to ensure that the functions specified by requirements/specifications are appropriately allocated to the system elements. Regardless of the
tool used to generate the DFMEA, it is critical that the analysis is written in functional terms. The main objectives of a Design Function Analysis are:
- Visualization of product or process functions
- Function tree/net or function analysis form sheet and parameter diagram (P-diagram)
- Cascade of customer (external and internal) functions with associated requirements
- Association of requirements or characteristics to functions
- Collaboration between engineering teams (systems, safety, and components)
- Basis for the Failure Analysis step
The structure provides the basis so that each System Element may be individually analyzed with regard to its functions and requirements. For this, comprehensive knowledge of the system and of its operating and environmental conditions is necessary, for example: heat, cold, dust, splash water, salt, icing, vibrations, electrical failures, etc.
3.2 Function
A function describes what the item/system element is intended to do. A function is assigned to a system element, and a system element can contain multiple functions. The description of a function needs to be clear. The recommended phrase format is an “action verb” followed by a “noun” that describes a measurable function. A function should be in the present tense; it uses the verb's base form (e.g., deliver, contain, control, assemble, transfer).
Examples: deliver power, contain fluid, control speed, transfer heat, color black. Functions describe the relationship between the input and output of an item/system element with the aim of fulfilling a task.
Note: A component (i.e., a part or item in a parts list) may have a purpose/function where there is no input/output. Examples such as a seal, grease, clip, bracket, housing, connector, flux, etc. have functions and requirements including material, shape, thickness, etc. In addition to the primary functions of an item, other functions that may be evaluated include secondary functions such as interface functions, diagnostic functions, and serviceability functions.

3.3 Requirements
Requirements are divided into two groups: functional requirements and non-functional requirements. A functional requirement is a criterion by which the intended performance of the function is judged or measured (e.g., material stiffness). A non-functional requirement is a limitation on the freedom for design decisions (e.g., temperature range). Requirements may be derived from various sources, external and internal, such as:
- Legal requirements: e.g., environmentally friendly product design, suitable for recycling, safe in the event of potential misuse by the operator, non-flammable, etc.
- Industry norms and standards: e.g., ISO 9001; VDA Volume 6 Part 3, Process Audit; SAE J1739; ISO 26262 Functional Safety
- Customer requirements: explicit (i.e., in the customer specification) and implicit (i.e., freedom from prohibited materials), under all specified conditions
- Internal requirements: product-specific (i.e., requirements specifications, manufacturability, suitability for testing, compatibility with other existing products, reusability, cleanliness, generation, entry and spreading of particles)
- Product characteristics: a distinguishing feature (or quantifiable attribute) of a product, such as a journal diameter or surface finish
3.4 Parameter Diagram (P-Diagram)
Parameters are considered to be attributes of the behavior of a function. A Parameter (P) Diagram is a graphical representation of the environment in which an item exists. A P-Diagram includes factors which influence the transfer function between inputs and outputs, focusing on the design decisions necessary to optimize output. A P-Diagram is used to characterize the behavior of a system or component in the context of a single function. P-Diagrams are not required for all functions. Teams should focus on a few key functions affected by new conditions and those with a history of robustness issues in previous applications. More than one P-Diagram may be needed in order to illustrate the function(s) of the system or component that are of concern to the FMEA team.
The complete functional description forms the basis for subsequent failure analysis and risk mitigation. A P-Diagram focuses on achievement of function. It clearly identifies all influences on that function, including what can be controlled (Control Factors) and what cannot reasonably be controlled (Noise Factors). The P-Diagram, completed for specific Ideal Functions, assists in the identification of:
- Factors, levels, responses and signals necessary for system optimization
- Functions which are inputs to the DFMEA
- Control and Noise factors which could affect functional performance
- Unintended system outputs (Diverted Outputs)
- Information gained through developing a P-Diagram provides input to the test plan.
Referring to the Figure below, the output (grey area) of the item/System Element often deviates/varies from the desired behavior (straight line). The control factors act on the design to achieve as close as practical to the desired behavior.

A Parameter Diagram consists of dynamic inputs (including signals), factors that could affect system performance (control and noise), sources of variation, and outputs (intended outputs and
unintended/diverted outputs). The following is an example of a Parameter Diagram which is used to assess the influences on a function of a product including:
- Input (What you want to put in to get the desired result) is a description of the sources required for fulfilling the system functionality.
- Function (What you want to happen) is described in a Parameter Diagram with an active verb followed by a measurable noun in the present tense and associated with requirements.
- Functional Requirements (What you need to make the function happen) are related to the performance of a function
- Control Factors (What you can do to make it happen) which can be adjusted to make the design more insensitive to noise (more robust) are identified. One type of Control Factor is a Signal Factor. Signal Factors are adjustment factors, set directly or indirectly by a user of a system, that proportionally change the system response (e.g., brake pedal movement changes stopping distance). Only dynamic systems utilize signal factors. Systems without signal factors are called static systems.
- Non-Functional Requirements (What you need besides the functional requirements) which limit the design options.
- Intended Output (What you want from the system) are ideal, intended functional outputs whose magnitude may (dynamic system) or may not (static system) be linearly proportional to a signal factor (e.g., low beam activation for a headlamp; stopping distance as a function of brake pedal movement).
- Unintended Output (What you don't want from the system) are malfunctioning behaviors or unintended system outputs that divert system performance from the ideal intended function. For example, energy associated with a brake system is ideally transformed into friction. Heat, noise and vibration are examples of brake energy diverted outputs. Diverted Outputs may be losses to thermal radiation, vibration, electrical resistance, flow restriction, etc.
- Noise Factors (What interferes with achieving the desired output) are parameters which represent potentially significant sources of variation for the system response and cannot be controlled or are not practical to control from the perspective of the engineer. Noises are described in physical units. Noise factors are categorized as follows:
- Piece-to-Piece Variation (in a component and interference between components)
- Change Over Time (aging over lifetime, e.g., mileage, aging, wear)
- Customer Usage (use out of desired specifications)
- External Environment (conditions during customer usage, e.g., road type, weather)
- System Interactions (interference from other systems)

3.5 Function Analysis
The interactions of the functions of several System Elements are to be demonstrated, for example as a function tree/network or using the DFMEA form sheet. The focus of the analysis cascades from OEM to Tier 1 supplier to Tier N supplier. The purpose of creating a function tree/network or function analysis on the DFMEA form sheet is to incorporate the technical dependency between the functions; it subsequently supports the visualization of the failure dependencies. When there is a functional relationship between hierarchically linked functions, there is a potential relationship between the associated failures. Otherwise, if there is no functional relationship between hierarchically linked functions, there will also be no potential relationship between the associated failures. For the preparation of the function tree/network, the functions that are involved need to be examined. Sub-functions enable the performance of an overall function. All sub-functions are linked logically with each other in the function structure (Boolean AND relationships). A function structure becomes more detailed from the top down; the lower-level function describes how the higher-level function is to be fulfilled. For the logical linking of a function structure, it is helpful to ask:
- “How is the higher level function enabled by lower level functions?” (Top-Down) and
- “Why is the lower level function needed?” (Bottom-Up).

The function structure can be created in the Function Analysis section:
Function Analysis (Step 3)

| 1. Next Higher Level Function and Requirement | 2. Focus Element Function and Requirement | 3. Next Lower Level Function and Requirement or Characteristic |
|---|---|---|
| Convert electrical energy into mechanical energy according to parameterization | Commutation system transports the electrical current between coil pairs of the electromagnetic converters | Brush card body transports forces between spring and motor body to hold the brush spring system in x, y, z position (support commutating contact points) |
- Next Higher Level Function and Requirement: The function in the scope of the analysis.
- Focus Element Function and Requirement: The function of the associated System Element (item in focus) identified in the Structure Analysis.
- Next Lower Level Function and Requirement or Characteristic: The function of the associated Component Element identified in the Structure Analysis.
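
The How/Why questions above give a mechanical way to traverse a function net: child links answer "how", parent links answer "why". A minimal sketch using abbreviated functions from the table above (the node type is an assumption for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class FunctionNode:
    description: str                           # action verb + noun, present tense
    parent: Optional["FunctionNode"] = None
    children: list["FunctionNode"] = field(default_factory=list)

    def enabled_by(self, description: str) -> "FunctionNode":
        """Answers 'How is this function enabled?' (top-down link)."""
        child = FunctionNode(description, parent=self)
        self.children.append(child)
        return child


# Abbreviated functions from the Function Analysis table
convert = FunctionNode("convert electrical energy into mechanical energy")
transport = convert.enabled_by("transport electrical current between coil pairs")
hold = transport.enabled_by("transport forces to hold brush spring system in x, y, z position")

# 'Why is the lower level function needed?' (bottom-up): follow parent links
node = hold
while node.parent is not None:
    print(f"{node.description!r} is needed for {node.parent.description!r}")
    node = node.parent
```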
3.6 Collaboration between Engineering Teams (Systems, Safety, and Components)
Engineering teams within the company need to collaborate to make sure information is consistent for a project or customer program especially when multiple DFMEA teams are simultaneously conducting the technical risk analysis. For example, a systems group might be developing the design architecture (structure) and this information would be helpful to the DFMEA to avoid duplication of work. A safety team may be working with the customer to understand the safety goals and hazards. This information would be helpful to the DFMEA to ensure consistent severity ratings for failure effects.
3.7 Basis for Failure Analysis
Complete definition of functions (in positive words) will lead to a comprehensive Step 4 Failure Analysis because the potential failures are ways the functions could fail (in negative words).
Step 4: Failure Analysis
4.1 Purpose
The purpose of the Design Failure Analysis is to identify failure causes, modes, and effects, and show their relationships to enable risk assessment. The main objectives of a Design Failure Analysis are:
- Establishment of the Failure Chain
- Potential Failure Effects, Failure Modes, and Failure Causes for each product function
- Collaboration between customer and supplier (Failure Effects)
- Basis for the documentation of failures in the FMEA form sheet and the Risk Analysis step
4.2 Failures
Failures of a function are derived from the function descriptions. There are several types of potential failure modes including, but not limited to:
- Loss of function (e.g. inoperable, fails suddenly)
- Degradation of function (e.g. performance loss over time)
- Intermittent function (e.g. operation randomly starts/stops/starts)
- Partial function (e.g. performance loss)
- Unintended function (e.g. operation at the wrong time, unintended direction, unequal performance)
- Exceeding function (e.g. operation above acceptable threshold)
- Delayed function (e.g. operation after unintended time interval)

A system or subsystem failure mode is described in terms of functional loss or degradation, e.g., steering turns right when the hand wheel is moved left (an example of an unintended function). When necessary, the operating condition of the vehicle should be included, e.g., loss of steering assist during start-up or shut-down. A component/part failure mode is comprised of a noun and a failure description, e.g., seal twisted. It is critical that the description of the failure is clear and understandable for the person who is intended to read it. A statement such as “not fulfilled,” “not OK,” “defective,” “broken,” and so on is not sufficient. More than one failure may be associated with a function. Therefore, the team should not stop as soon as one failure is identified; they should ask “how else can this fail?”

4.3 The Failure Chain
There are three different aspects of a failure analyzed in an FMEA:
- Failure Effect (FE)
- Failure Mode (FM)
- Failure Cause (FC)

4.4 Failure Effects
A Failure Effect is defined as the consequence of a failure mode. Describe effects on the next level of product integration (internal or external), the end user who is the vehicle operator (external), and government regulations (regulatory) as applicable. Customer effects should state what the user might notice or experience including those effects that could impact safety. The intent is to forecast the failure effects consistent with the team’s level of knowledge. A failure mode can have multiple effects relating to internal and external customers. Effects may be shared by OEMs with suppliers and suppliers with sub-suppliers as part of design collaboration. The severity of failure effects is evaluated on a ten-point scale. Examples of failure effects on the end user:
- No discernible effect
- Poor appearance, e.g., unsightly close-out, color fade, cosmetic corrosion
- Noise, e.g., misalignment/rub, fluid-borne noise, squeak/rattle, chirp, and squawk
- Unpleasant odor, rough feel, increased efforts
- Operation impaired, intermittent, unable to operate, electromagnetic incompatibility (EMC)
- External leak resulting in performance loss, erratic operation, instability
- Unable to drive vehicle (walk home)
- Noncompliance with government regulations
- Loss of steering or braking
NOTE: In some cases, the team conducting the analysis may not know the end user effect, e.g., catalogue parts, off-the-shelf products, Tier 3 components. When this information is not known, the effects should be defined in terms of the part function and specification. In these cases, the system integrator is responsible for ensuring that the correct part for the application is selected, e.g., auto, truck, marine, agriculture.
4.5 Failure Mode
A Failure Mode is defined as the manner in which an item could fail to meet or deliver the intended function. The Failure Modes are derived from the Functions. Failure Modes should be described in technical terms and not necessarily as symptoms noticeable by the customer. In preparing the DFMEA, assume that the design will be manufactured and assembled to the design intent. Exceptions can be made at the team's discretion where historical data indicates deficiencies exist in the manufacturing process. Examples of component-level failure modes include, but are not limited to:

Examples of system-level failure modes include, but are not limited to:
- Complete fluid loss
- Disengages too fast
- Does not disengage
- Does not transmit torque
- Does not hold full torque
- Inadequate structural support
- Loss of structural support
- No signal
- Intermittent signal
- Provides too much pressure/signal/voltage
- Provides insufficient pressure/signal/voltage
- Unable to withstand load/temperature/vibration
4.6 Failure Cause
A Failure Cause is an indication of why the failure mode could occur. The consequence of a cause is the failure mode. Identify, to the extent possible, every potential cause for each failure mode. The consequences of not being robust to noise factors (found on a P-Diagram) may also be Failure Causes. The cause should be listed as concisely and completely as possible so that remedial efforts (controls and actions) can be aimed at the appropriate causes. The Failure Causes can be derived from the failure modes of the next lower level function and requirement and from the potential noise factors (e.g., from a Parameter Diagram). Types of potential failure causes could be, but are not limited to:
- Inadequate design for functional performance (e.g., incorrect material specified, incorrect geometry, incorrect part selected for application, incorrect surface finish specified, inadequate travel specification, improper friction material specified, insufficient lubrication capability, inadequate design life assumption, incorrect algorithm, improper maintenance instructions, etc.)
- System interactions (e.g., mechanical interfaces, fluid flow, heat sources, controller feedback, etc.)
- Changes over time (e.g., yield, fatigue, material instability, creep, wear, corrosion, chemical oxidation, electromigration, over-stressing, etc.)
- Design inadequate for external environment (e.g., heat, cold, moisture, vibration, road debris, road salt, etc.)
- End user error or behavior (e.g., wrong gear used, wrong pedal used, excessive speeds, towing, wrong fuel type, service damage, etc.)
- Lack of robust design for manufacturing (e.g., part geometry allows part installation backwards or upside down, part lacks distinguishing design features, shipping container design causes parts to scratch or stick together, part handling causes damage, etc.)
- Software issues (e.g., undefined state, corrupted code/data)
4.7 Failure Analysis
Depending on whether the analysis is being done at the system, sub-system or component level, a failure can be viewed as a failure effect, failure mode, or failure cause. Failure Modes, Failure
Causes, and Failure Effects should correspond with the respective column in the FMEA form sheet. The Figure below shows a cascade of design-related failure modes, causes, and effects from the vehicle level to the characteristic level. The focus element (Failure Mode), Causes, and Effects are different depending on the level of design integration. Consequently, a Failure Cause at the OEM level becomes a Failure Mode at the next lower (Tier 1) level. However, Failure Effects at the vehicle level (as perceived by the end user) should be documented when known, but not assumed. Failure Networks may be created by the organization that owns multiple levels of the design. When multiple organizations are responsible for different levels of the design, they are responsible for communicating failure effects to the next higher or next lower level as appropriate.

To link Failure Cause(s) to a Failure Mode, the question should be “Why is the Failure Mode happening?” To link Failure Effects to a Failure Mode, the question should be “What happens in the event of a Failure Mode?”

The failure structure can be created in the Failure Analysis section.
Failure Analysis (Step 4)

| 1. Failure Effect (FE) to the Next Higher Level Element and/or End User | 2. Failure Mode (FM) of the Focus Element | 3. Failure Cause (FC) of the Next Lower Element or Characteristic |
|---|---|---|
| Torque and rotating velocity of the window lifter motor too low | Angle deviation by commutation system intermittently connects the wrong coils (L1, L3 and L2 instead of L1, L2 and L3) | Brush card body bends in contact area of the carbon brush |
Following once again the header numbering (1, 2, 3) and color coding, begin building the Failure Chain by inspecting the items in the Function Analysis.
- Failure Effects (FE): The effect of failure associated with the “Next Higher Level Element and/or End User” in the Function Analysis.
- Failure Mode (FM): The mode (or type) of failure associated with the “Focus Element” in the Function Analysis.
- Failure Cause (FC): The cause of failure associated with the “Next Lower Element or Characteristic” in the Function Analysis.
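
The failure chain above mirrors the level cascade described in Section 4.7: the cause at one level of integration is the mode at the next lower level. A minimal sketch that links the three window lifter failures from the table (the data type is illustrative only):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Failure:
    description: str
    level: str                              # e.g., "system", "subsystem", "component"
    caused_by: Optional["Failure"] = None   # failure at the next lower level


# Window lifter failure chain from the Failure Analysis table
fc = Failure("brush card body bends in contact area of the carbon brush", "component")
fm = Failure("commutation system intermittently connects the wrong coils", "subsystem", caused_by=fc)
fe = Failure("torque and rotating velocity of the window lifter motor too low", "system", caused_by=fm)

# Viewed from the subsystem level: fe is the Failure Effect, fm the Failure Mode,
# fc the Failure Cause. One level lower, fm becomes the effect and fc the mode,
# which is the cascade described in Section 4.7.
chain, node = [], fe
while node is not None:
    chain.append(f"[{node.level}] {node.description}")
    node = node.caused_by
print(" <- caused by ".join(chain))
```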
4.8 Failure Analysis Documentation
The DFMEA Form Sheet can have multiple views once the Structure Analysis, Function Analysis, and Failure Analysis are complete.

4.9 Collaboration between Customer and Supplier (Failure Effects)
The output of the Failure Analysis may be reviewed by customers and suppliers before or after the Risk Analysis step, based on agreements with the customer and the need for sharing with the supplier.
4.10 Basis for Risk Analysis
Complete definition of potential failures will lead to a complete Step 5 Risk Analysis because the ratings of Severity, Occurrence, and Detection are based on the failure descriptions. The Risk Analysis may be incomplete if potential failures are too vague or missing.
Step 5: Risk Analysis
5.1 Purpose
The purpose of Design Risk Analysis is to estimate risk by evaluating Severity, Occurrence and Detection, and prioritize the need for actions. The main objectives of the Design Risk Analysis are:
- Assignment of existing and/or planned controls and rating of failures
- Assignment of Prevention Controls to the Failure Causes
- Assignment of Detection Controls to the Failure Causes and/or Failure Modes
- Rating of Severity, Occurrence and Detection for each failure chain
- Evaluation of Action Priority
- Collaboration between customer and supplier (Severity)
- Basis for the Optimization step
5.2 Design Controls
Current design controls are proven considerations that have been established for similar, previous designs. Design control documents are a basis for the robustness of the design. Prevention-type controls and detection-type controls are part of the current library of verification and validation methods. Prevention controls provide information or guidance that is used as an input to the design. Detection controls describe established verification and validation procedures that have been previously demonstrated to detect the failure, should it occur. Specific references to design features that act to prevent a failure, or to line items in published test procedures, will establish a credible link between the failure and the design control. Prevention and/or detection methods that are necessary but not part of a current library of defined procedures should be written as actions in the DFMEA.
5.3 Current Prevention Controls (PC)
Current Prevention Controls describe how a potential cause which results in the Failure Mode is mitigated using existing and planned activities. They describe the basis for determining the occurrence rating. Prevention Controls relate back to the performance requirement. For items which have been designed out of context and are purchased as stock or catalog items from a supplier, the prevention control should document a specific reference to how the item fulfills the requirement. This may be a reference to a specification sheet in a catalog. Current Prevention Controls need to be clearly and comprehensively described, with references cited. If necessary, this can be done by reference to an additional document. Listing a control such as “proven material” or “lessons learned” is not a clear enough indication. The DFMEA team should also consider margin of safety in the design as a prevention control. Examples of Current Prevention Controls:
- EMC Directives adhered to, Directive 89/336/EEC
- System design according to simulation, tolerance calculation, and Procedure – analysis of concepts to establish design requirements
- Published design standard for a thread class
- Heat treat specification on drawing
- Sensor performance specifications.
- Mechanical redundancy (fail-safe)
- Design for testability
- Design and material standards (internal and external)
- Documentation (e.g., records of best practices, lessons learned, etc.) from similar designs
- Error-proofing (Poka-Yoke design, i.e., part geometry prevents wrong orientation)
- Substantially identical to a design which was validated for a previous application, with documented performance history. (However, if there is a change to the duty cycle or operating conditions, then the carry-over item requires re-validation in order for the detection control to be relevant.)
- Shielding or guards which mitigate potential mechanical wear, thermal exposure, or EMC.
- Conformance to best practices
After completion of the preventive actions, the occurrence is verified by the Detection Control(s).
5.4 Current Detection Controls (DC)
Current Detection Controls detect the existence of a failure cause or the failure mode before the item is released for production. Current Detection Controls that are listed in the FMEA represent planned activities (or activities already completed), not potential activities which may never actually be conducted. Current Detection Controls need to be clearly and comprehensively described. Listing a control such as “Test” or “Lab Test” is not a clear enough indication of a detection control. References to specific tests, test plans, or procedures should be cited as applicable, to indicate that the FMEA team has determined that the test will actually detect the failure mode or cause, if it occurs (e.g., Test No. 1234 Burst Pressure Test, Paragraph 6.1). Examples of Current Detection Controls:
- Function check
- Burst test
- Environmental test
- Driving test
- Endurance test
- Range of motion studies
- Hardware in-the-loop
- Software in-the-loop
- Design of experiments
- Voltage output lab measurements
All controls that lead to a detection of the failure cause or the failure mode are entered into the “Current Detection Controls” column.

5.5 Confirmation of Current Prevention and Detection Controls
The effectiveness of the current prevention and detection controls should be confirmed. This can be done during validation tear down reviews. Such confirmation can be documented within the DFMEA, or within other project documents, as appropriate, according to the team’s normal product development procedure. Additional action may be needed if the controls are proven not to be effective. The occurrence and detection evaluations should be reviewed when using FMEA entries from previous products, due to the possibility of different conditions for the new product.

5.6 Evaluations
Each failure mode, cause and effect relationship is assessed to estimate risk. There are rating criteria for the evaluation of risk:
- Severity (S): stands for the severity of the failure effect
- Occurrence (O): stands for the occurrence of the failure cause
- Detection (D): stands for the detection of the failure cause and/or failure mode, should it occur
Evaluation numbers from 1 to 10 are used for S, O and D respectively, where 10 stands for the highest risk contribution.
NOTE: It is not appropriate to compare the ratings of one team’s FMEA with the ratings of another team’s FMEA, even if the product/process appear to be identical, since each team’s environment is unique and thus their respective individual ratings will be unique (i.e., the ratings are subjective).
5.7 Severity (S)
The Severity rating (S) is a measure associated with the most serious failure effect for a given failure mode of the function being evaluated. The rating is used to identify priorities relative to the scope of an individual FMEA and is determined without regard for occurrence or detection. Severity should be estimated using the criteria in the Severity Table. The table may be augmented to include product-specific examples. The FMEA project team should agree on evaluation criteria and a rating system which are consistent, even if modified for individual design analysis. The Severity evaluations of the failure effects should be transferred by the customer to the supplier, as needed.
Product General Evaluation Criteria Severity (S)

Potential Failure Effects are rated according to the criteria below. The “Corporate or Product Line Examples” column is blank until filled in by the user.

| S | Effect | Severity criteria | Corporate or Product Line Examples |
|---|---|---|---|
| 10 | Very High | Affects safe operation of the vehicle and/or other vehicles, the health of driver or passengers or road users or pedestrians. | |
| 9 | | Noncompliance with regulations. | |
| 8 | High | Loss of primary vehicle function necessary for normal driving during expected service life. | |
| 7 | | Degradation of primary vehicle function necessary for normal driving during expected service life. | |
| 6 | Moderate | Loss of secondary vehicle function. | |
| 5 | | Degradation of secondary vehicle function. | |
| 4 | | Very objectionable appearance, sound, vibration, harshness, or haptics. | |
| 3 | Low | Moderately objectionable appearance, sound, vibration, harshness, or haptics. | |
| 2 | | Slightly objectionable appearance, sound, vibration, harshness, or haptics. | |
| 1 | Very Low | No discernible effect. | |
5.8 Occurrence (O)
The Occurrence rating (O) is a measure of the effectiveness of the prevention control, taking into account the rating criteria. Occurrence ratings should be estimated using the criteria in the Occurrence Table. The table may be augmented to include product-specific examples. The FMEA project team should agree on evaluation criteria and a rating system which are consistent, even if modified for individual design analysis (e.g., passenger car, truck, motorcycle, etc.). The Occurrence rating number is a relative rating within the scope of the FMEA and may not reflect the actual occurrence. The Occurrence rating describes the potential of the failure cause to occur in customer operation, according to the rating table, considering results of already completed detection controls. Expertise, data handbooks, warranty databases, or other experience in the field of comparable products, for example, can be consulted for the analysis of the evaluation numbers. When failure causes are rated for occurrence, it is done taking into account an estimation of the effectiveness of the current prevention control. The accuracy of this rating depends on how well the prevention control has been described. Questions such as the following may be helpful for a team when trying to determine the appropriate Occurrence rating:
- What is the service history and field experience with similar components, subsystems, or systems?
- Is the item a carryover product or similar to a previous level item?
- How significant are changes from a previous level item?
- Is the item completely new?
- What is the application or what are the environmental changes?
- Has an engineering analysis (e.g. reliability) been used to estimate the expected comparable occurrence rate for the application?
- Have prevention controls been put in place?
- Has the robustness of the product been proven during the product development process?
Occurrence Potential (O) for the Product

Potential Failure Causes are rated according to the criteria below. Consider Product Experience and Prevention Controls when determining the best Occurrence estimate (qualitative rating). The “Corporate or Product Line Examples” column is blank until filled in by the user.

| O | Prediction of Failure Cause Occurring | Occurrence criteria – DFMEA | Corporate or Product Line Examples |
|---|---|---|---|
| 10 | Extremely High | First application of new technology anywhere without operating experience and/or under uncontrolled operating conditions. No product verification and/or validation experience. Standards do not exist and best practices have not yet been determined. Prevention controls not able to predict field performance or do not exist. | |
| 9 | Very High | First use of design with technical innovations or materials within the company. New application, or change in duty cycle/operating conditions. No product verification and/or validation experience. Prevention controls not targeted to identify performance to specific requirements. | |
| 8 | | First use of design with technical innovations or materials on a new application. New application, or change in duty cycle/operating conditions. No product verification and/or validation experience. Few existing standards and best practices, not directly applicable for this design. Prevention controls not a reliable indicator of field performance. | |
| 7 | High | New design based on similar technology and materials. New application, or change in duty cycle/operating conditions. No product verification and/or validation experience. Standards, best practices, and design rules apply to the baseline design, but not the innovations. Prevention controls provide limited indication of performance. | |
| 6 | | Similar to previous designs, using existing technology and materials. Similar application, with changes in duty cycle or operating conditions. Previous testing or field experience. Standards and design rules exist but are insufficient to ensure that the failure cause will not occur. Prevention controls provide some ability to prevent a failure cause. | |
| 5 | Moderate | Detail changes to previous design, using proven technology and materials. Similar application, duty cycle or operating conditions. Previous testing or field experience, or new design with some test experience related to the failure. Design addresses lessons learned from previous designs. Best practices re-evaluated for this design but have not yet been proven. Prevention controls capable of finding deficiencies in the product related to the failure cause and provide some indication of performance. | |
| 4 | | Almost identical design with short-term field exposure. Similar application, with minor change in duty cycle or operating conditions. Previous testing or field experience. Predecessor design and changes for new design conform to best practices, standards, and specifications. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate likely design conformance. | |
| 3 | Low | Detail changes to known design (same application, with minor change in duty cycle or operating conditions) and testing or field experience under comparable operating conditions, or new design with successfully completed test procedure. Design expected to conform to standards and best practices, considering lessons learned from previous designs. Prevention controls capable of finding deficiencies in the product related to the failure cause and predict conformance of production design. | |
| 2 | Very Low | Almost identical mature design with long-term field exposure. Same application, with comparable duty cycle and operating conditions. Testing or field experience under comparable operating conditions. Design expected to conform to standards and best practices, considering lessons learned from previous designs, with significant margin of confidence. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate confidence in design conformance. | |
| 1 | Extremely Low | Failure eliminated through prevention control and failure cause is not possible by design. | |

Product Experience: History of product usage within the company (novelty of design, application, or use case). Results of already completed detection controls provide experience with the design.

Prevention Controls: Use of best practices for product design, design rules, company standards, lessons learned, industry standards, material specifications, government regulations, and effectiveness of prevention-oriented analytical tools including Computer Aided Engineering, math modelling, simulation studies, tolerance stacks, and design/safety margins.

Note: Occurrence ratings of 10, 9, 8, and 7 can drop based on product validation activities.
5.9 Detection (D)
The Detection rating (D) is an estimated measure of the effectiveness of the detection control to reliably demonstrate the failure cause or failure mode before the item is released for production. The detection rating is the rating associated with the most effective detection control. Detection is a relative rating, within the scope of the individual FMEA, and is determined without regard for severity or occurrence. Detection should be estimated using the criteria in the Detection Table. The FMEA project team should agree on evaluation criteria and a rating system which are consistent, even if modified for individual product analysis. The detection rating is initially a prediction of the effectiveness of any yet unproven control. The effectiveness can be verified and re-evaluated after the detection control is completed. However, the completion or cancellation of a detection control (such as a test) may also affect the estimation of occurrence. In determining this estimate, questions such as the following should be considered:
- Which test is most effective in detecting the Failure Cause or the Failure Mode?
- What usage profile / duty cycle is required to detect the failure?
- What sample size is required to detect the failure?
- Is the test procedure proven for detecting this Cause / Failure Mode?
Detection Potential (D) for the Validation of the Product Design

Detection Controls are rated according to Detection Method Maturity and Opportunity for Detection. The “Corporate or Product Line Examples” column is blank until filled in by the user.

| D | Ability to Detect | Detection Method Maturity | Opportunity for Detection | Corporate or Product Line Examples |
|---|---|---|---|---|
| 10 | Very Low | Test procedure yet to be developed. | Test method not defined. | |
| 9 | | Test method not designed specifically to detect failure mode or cause. | Pass-Fail, Test-to-Fail, Degradation Testing | |
| 8 | Low | New test method; not proven. | Pass-Fail, Test-to-Fail, Degradation Testing | |
| 7 | Moderate | Proven test method for verification of functionality or validation of performance, quality, reliability and durability; planned timing is later in the product development cycle such that test failures may result in production delays for re-design and/or re-tooling. | Pass-Fail | |
| 6 | | | Test-to-Fail | |
| 5 | | | Degradation Testing | |
| 4 | High | Proven test method for verification of functionality or validation of performance, quality, reliability and durability; planned timing is sufficient to modify production tools before release for production. | Pass-Fail | |
| 3 | | | Test-to-Fail | |
| 2 | | | Degradation Testing | |
| 1 | Very High | Prior testing confirmed that failure mode or cause cannot occur, or detection methods proven to always detect the failure mode or failure cause. | | |
5.10 Action Priority (AP)
Once the team has completed the initial identification of Failure Modes, Failure Effects, Failure Causes, and controls, including ratings for severity, occurrence, and detection, they must decide if further efforts are needed to reduce the risk. Due to the inherent limitations on resources, time, technology, and other factors, they must choose how to best prioritize these efforts. The Action Priority (AP) method was created to give more emphasis to severity first, then occurrence, then detection. This logic follows the failure-prevention intent of FMEA. The AP table offers a suggested high-medium-low priority for action. Companies can use a single system to evaluate action priorities instead of the multiple systems required by multiple customers.

Risk Priority Numbers are the product of S x O x D and range from 1 to 1000. The RPN distribution can provide some information about the range of ratings, but RPN alone is not an adequate method to determine the need for more actions, since RPN gives equal weight to S, O, and D. For this reason, RPN could result in similar risk numbers for very different combinations of S, O, and D, leaving the team uncertain about how to prioritize. When using RPN, it is recommended to use an additional method to prioritize like RPN results, such as S x O. The use of a Risk Priority Number (RPN) threshold is not a recommended practice for determining the need for actions. Risk matrices can represent combinations of S and O, S and D, and O and D. These matrices provide a visual representation of the results of the analysis and can be used as an input to prioritization of actions based on company-established criteria. Since the AP Table was designed to work with the Severity, Occurrence, and Detection tables, if the organization chooses to modify the S, O, D tables for specific products, processes, or projects, the AP table should also be carefully reviewed.
Note: Action Priority rating tables are the same for DFMEA and PFMEA, but different for FMEA-MSR.
Priority High (H): Highest priority for review and action. The team needs to either identify an appropriate action to improve Prevention and/or Detection Controls or justify and document why current controls are adequate.
Priority Medium (M): Medium priority for review and action. The team should identify appropriate actions to improve prevention and/or detection controls or, at the discretion of the company, justify and document why current controls are adequate.
Priority Low (L): Low priority for review and action. The team could identify actions to improve prevention or detection controls.
It is recommended that potential Severity 9-10 Failure Effects with Action Priority High and Medium, at a minimum, be reviewed by management, including any recommended actions that were taken. This is not the prioritization of High, Medium, or Low risk; it is the prioritization of the actions to reduce risk.
Note: It may be helpful to include a statement such as “No further action is needed” in the Remarks field as appropriate.
Action Priority (AP) for DFMEA and PFMEA

Action Priority is based on combinations of Severity, Occurrence, and Detection ratings in order to prioritize actions for risk reduction.

| Effect | S | Prediction of Failure Cause occurring | O | Ability to detect | D | Action Priority (AP) | Comments (blank until filled in by user) |
|---|---|---|---|---|---|---|---|
| Product or Plant Effect Very high | 9-10 | Very high | 8-10 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | H | |
| | | | | Very high | 1 | H | |
| | | High | 6-7 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | H | |
| | | | | Very high | 1 | H | |
| | | Moderate | 4-5 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | H | |
| | | | | Very high | 1 | M | |
| | | Low | 2-3 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | M | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Very low | 1 | Very high – Very low | 1-10 | L | |
| Product or Plant Effect High | 7-8 | Very high | 8-10 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | H | |
| | | | | Very high | 1 | H | |
| | | High | 6-7 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | H | |
| | | | | Very high | 1 | M | |
| | | Moderate | 4-5 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | M | |
| | | | | High | 2-4 | M | |
| | | | | Very high | 1 | M | |
| | | Low | 2-3 | Low – Very low | 7-10 | M | |
| | | | | Moderate | 5-6 | M | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Very low | 1 | Very high – Very low | 1-10 | L | |
| Product or Plant Effect Moderate | 4-6 | Very high | 8-10 | Low – Very low | 7-10 | H | |
| | | | | Moderate | 5-6 | H | |
| | | | | High | 2-4 | M | |
| | | | | Very high | 1 | M | |
| | | High | 6-7 | Low – Very low | 7-10 | M | |
| | | | | Moderate | 5-6 | M | |
| | | | | High | 2-4 | M | |
| | | | | Very high | 1 | L | |
| | | Moderate | 4-5 | Low – Very low | 7-10 | M | |
| | | | | Moderate | 5-6 | L | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Low | 2-3 | Low – Very low | 7-10 | L | |
| | | | | Moderate | 5-6 | L | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Very low | 1 | Very high – Very low | 1-10 | L | |
| Product or Plant Effect Low | 2-3 | Very high | 8-10 | Low – Very low | 7-10 | M | |
| | | | | Moderate | 5-6 | M | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | High | 6-7 | Low – Very low | 7-10 | L | |
| | | | | Moderate | 5-6 | L | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Moderate | 4-5 | Low – Very low | 7-10 | L | |
| | | | | Moderate | 5-6 | L | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Low | 2-3 | Low – Very low | 7-10 | L | |
| | | | | Moderate | 5-6 | L | |
| | | | | High | 2-4 | L | |
| | | | | Very high | 1 | L | |
| | | Very low | 1 | Very high – Very low | 1-10 | L | |
| No discernible effect | 1 | Very low – Very high | 1-10 | Very high – Very low | 1-10 | L | |
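Because the AP assignments above are deterministic, the table can be encoded directly for use in a spreadsheet or FMEA tool. The following is a minimal sketch in Python (an assumption; the handbook prescribes no code or data format), with each rule transcribed from the rows above. The closing example shows why an RPN threshold cannot reproduce these priorities, as discussed in 5.10.

```python
# Hypothetical encoding of the Action Priority (AP) table above.
# Each rule is (S range, O range, D range, AP); ranges are inclusive.
RULES = [
    # Severity 9-10
    ((9, 10), (8, 10), (1, 10), "H"),
    ((9, 10), (6, 7),  (1, 10), "H"),
    ((9, 10), (4, 5),  (2, 10), "H"), ((9, 10), (4, 5), (1, 1), "M"),
    ((9, 10), (2, 3),  (7, 10), "H"), ((9, 10), (2, 3), (5, 6), "M"),
    ((9, 10), (2, 3),  (1, 4),  "L"),
    ((9, 10), (1, 1),  (1, 10), "L"),
    # Severity 7-8
    ((7, 8), (8, 10), (1, 10), "H"),
    ((7, 8), (6, 7),  (2, 10), "H"), ((7, 8), (6, 7), (1, 1), "M"),
    ((7, 8), (4, 5),  (7, 10), "H"), ((7, 8), (4, 5), (1, 6), "M"),
    ((7, 8), (2, 3),  (5, 10), "M"), ((7, 8), (2, 3), (1, 4), "L"),
    ((7, 8), (1, 1),  (1, 10), "L"),
    # Severity 4-6
    ((4, 6), (8, 10), (5, 10), "H"), ((4, 6), (8, 10), (1, 4), "M"),
    ((4, 6), (6, 7),  (2, 10), "M"), ((4, 6), (6, 7),  (1, 1), "L"),
    ((4, 6), (4, 5),  (7, 10), "M"), ((4, 6), (4, 5),  (1, 6), "L"),
    ((4, 6), (1, 3),  (1, 10), "L"),
    # Severity 2-3
    ((2, 3), (8, 10), (5, 10), "M"), ((2, 3), (8, 10), (1, 4), "L"),
    ((2, 3), (1, 7),  (1, 10), "L"),
    # Severity 1
    ((1, 1), (1, 10), (1, 10), "L"),
]

def action_priority(s: int, o: int, d: int) -> str:
    """Return H, M, or L for integer S, O, D ratings from 1 to 10."""
    for (s_lo, s_hi), (o_lo, o_hi), (d_lo, d_hi), ap in RULES:
        if s_lo <= s <= s_hi and o_lo <= o <= o_hi and d_lo <= d <= d_hi:
            return ap
    raise ValueError("S, O, and D must each be integers from 1 to 10")

# Equal RPNs, different Action Priorities: RPN weights S, O, and D
# equally, while AP emphasizes severity first, then occurrence.
for s, o, d in [(9, 6, 1), (6, 9, 1)]:
    print(f"S={s} O={o} D={d}  RPN={s*o*d}  AP={action_priority(s, o, d)}")
# S=9 O=6 D=1  RPN=54  AP=H
# S=6 O=9 D=1  RPN=54  AP=M
```

The rule ranges are mutually exclusive and cover all 1000 combinations, so a lookup of this kind can also be used to verify a transcription of the table.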

5.11 Collaboration between Customer and Supplier (Severity)
The output of the Risk Analysis creates a mutual understanding of technical risk between customers and suppliers. Methods of collaboration range from verbal communication to formal reports. The amount of information shared is based on the needs of a project, company policy, contractual agreements, and so on. The information shared depends on the placement of the company in the supply chain. Some examples are listed below.
- The OEM may compare design functions, failure effects, and severity from a vehicle-level DFMEA with the Tier 1 supplier DFMEA.
- The Tier 1 supplier may compare design functions, failure effects, and severity from a subsystem DFMEA with the Tier 2 supplier who has design responsibility.
- The Tier 1 supplier communicates necessary information about product characteristics on product drawings and/or specifications, or by other means, including designation of standard or special characteristics and severity. This information is used as an input to the Tier 2 supplier's PFMEA as well as the Tier 1's internal PFMEA. When the design team communicates the associated risk of making product characteristics out of specification, the process team can build in the appropriate level of prevention and detection controls in manufacturing.
5.12 Basis for Optimization
The output of Steps 1 through 5 of the 7-step FMEA process is used to determine whether additional design or testing action is needed. The design reviews, customer reviews, management reviews, and cross-functional team meetings lead to Step 6, Optimization.
Step 6: Optimization
6.1 Purpose
The purpose of the Design Optimization step is to determine actions to mitigate risk and to assess the effectiveness of those actions. The main objectives of Design Optimization are:
- Identification of the actions necessary to reduce risks
- Assignment of responsibilities and deadlines for action implementation
- Implementation and documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
- Collaboration between the FMEA team, management, customers, and suppliers regarding potential failures
- Basis for refinement of the product requirements and prevention and detection controls
The primary objective of Design Optimization is to develop actions that reduce risk and increase customer satisfaction by improving the design. In this step, the team reviews the results of the risk analysis and assigns actions to lower the likelihood of occurrence of the Failure Cause or to increase the robustness of the Detection Control to detect the Failure Cause or Failure Mode. Actions may also be assigned which improve the design but do not necessarily lower the risk assessment rating.

Actions represent a commitment to take a specific, measurable, and achievable action, not potential actions which may never be implemented. Actions are not intended to be used for activities that are already planned, as these are documented in the Prevention or Detection Controls and are already considered in the initial risk analysis. If the team decides that no further actions are necessary, “No further action is needed” is written in the Remarks field to show that the risk analysis was completed.

The DFMEA should be used to assess technical risks related to continuous improvement of the design. The optimization is most effective in the following order:
- Design modifications to eliminate or mitigate a Failure Effect (FE)
- Design modifications to reduce the Occurrence (O) of the Failure Cause (FC)
- Increase in the Detection (D) ability for the Failure Cause (FC) or Failure Mode (FM)
In the case of design modifications, all impacted design elements are evaluated again. In the case of concept modifications, all steps of the FMEA are reviewed for the affected sections. This is necessary because the original analysis is no longer valid, since it was based upon a different design concept.
6.2 Assignment of Responsibilities
Each action should have a responsible individual and a Target Completion Date (TCD) associated with it. The responsible person ensures the action status is updated. If the action is confirmed, this person is also responsible for the action implementation. The Actual Completion Date for Preventive and Detection Actions is documented, including the date the actions were implemented.
Target Completion Dates should be realistic (i.e., in accordance with the product development plan, prior to process validation, prior to start of production).
6.3 Status of the Actions
Suggested levels for Status of Actions are listed below (a small illustrative encoding follows the list):
- Open: No action defined.
- Decision pending (optional): The action has been defined but has not yet been decided on. A decision paper is being created.
- Implementation pending (optional): The action has been decided on but not yet implemented.
- Completed: Completed actions have been implemented and their effectiveness has been demonstrated and documented. A final evaluation has been done.
- Not Implemented: Not implemented status is assigned when a decision is made not to implement an action. This may occur when risks related to practical and technical limitations are beyond current capabilities.
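As a minimal sketch (a hypothetical illustration, not part of the handbook), these status levels can be modeled as a small state machine. The transitions below are assumptions inferred from the level descriptions; a company's workflow tool may define them differently.

```python
from enum import Enum

class ActionStatus(Enum):
    OPEN = "Open"                                      # no action defined
    DECISION_PENDING = "Decision pending"              # optional: defined, not yet decided on
    IMPLEMENTATION_PENDING = "Implementation pending"  # optional: decided on, not yet implemented
    COMPLETED = "Completed"                            # implemented, effectiveness demonstrated
    NOT_IMPLEMENTED = "Not implemented"                # decision made not to implement

# Assumed forward-only transitions; the optional levels may be skipped.
ALLOWED = {
    ActionStatus.OPEN: {ActionStatus.DECISION_PENDING,
                        ActionStatus.IMPLEMENTATION_PENDING},
    ActionStatus.DECISION_PENDING: {ActionStatus.IMPLEMENTATION_PENDING,
                                    ActionStatus.NOT_IMPLEMENTED},
    ActionStatus.IMPLEMENTATION_PENDING: {ActionStatus.COMPLETED,
                                          ActionStatus.NOT_IMPLEMENTED},
}

def advance(current: ActionStatus, new: ActionStatus) -> ActionStatus:
    """Move an action to a new status if the transition is allowed."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move from {current.value} to {new.value}")
    return new
```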
The FMEA is not considered “complete” until the team assesses each item’s Action Priority and either accepts the level of risk or documents closure of all actions. If “No Action Taken” is recorded, then the Action Priority is not reduced, and the risk of failure is carried forward into the product design. Actions are open loops that need to be closed in writing.
6.4 Assessment of Action Effectiveness
When an action has been completed, Occurrence and Detection values are reassessed, and a new Action Priority may be determined. The new action receives a preliminary Action Priority rating as a prediction of effectiveness. However, the status of the action remains “implementation pending” until the effectiveness has been tested. After the tests are finalized, the preliminary rating is confirmed or adapted as indicated. The status of the action is then changed from “implementation pending” to “completed.” The reassessment should be based on the effectiveness of the Preventive and Detection Actions taken, and the new values are based on the definitions in the Design FMEA Occurrence and Detection rating tables.
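To make the reassessment concrete, the sketch below (hypothetical, reusing the `action_priority()` function from the sketch in 5.10) recomputes the Action Priority after a completed action changes the Occurrence and/or Detection ratings. Severity is held constant here because, per the text above, it is the Occurrence and Detection values that are reassessed.

```python
# Assumes action_priority() from the 5.10 sketch is in scope.
def reassess(s: int, o_before: int, d_before: int,
             o_after: int, d_after: int) -> tuple[str, str]:
    """Compare Action Priority before and after a completed action."""
    return (action_priority(s, o_before, d_before),
            action_priority(s, o_after, d_after))

# e.g. a detection action at S=8, O=4 that improves D from 7 to 3:
print(reassess(8, 4, 7, 4, 3))  # ('H', 'M') per the AP table
```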
6.5 Continual Improvement
The DFMEA serves as a historical record for the design. Therefore, the original Severity, Occurrence, and Detection (S, O, D) ratings need to be visible, or at a minimum available and accessible, as part of the version history. The completed analysis becomes a repository that captures the progression of design decisions and design refinements. However, original S, O, D ratings may be modified for foundation, family, or generic DFMEAs because the information is used as a starting point for an application-specific analysis.

6.6 Collaboration between the FMEA team, Management, Customers and Suppliers regarding Potential Failures
Communication between the FMEA team, management, customers, and suppliers during the development of the technical risk analysis and/or when the DFMEA is initially complete brings people together to improve their understanding of product functions and failures. In this way, there is a transfer of knowledge that promotes risk reduction.
Step 7: Results Documentation
7.1 Purpose
The purpose of the Results Documentation step is to summarize and communicate the results of the FMEA activity. The main objectives of Design Results Documentation are:
- Communication of results and conclusions of the analysis
- Establishment of the content of the documentation
- Documentation of actions taken, including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
- Communication of actions taken to reduce risks, including within the organization, and with customers and/or suppliers as appropriate
- Record of risk analysis and risk reduction to acceptable levels
7.2 FMEA Report
The report may be used for communication purposes within a company or between companies. The report is not meant to replace reviews of the DFMEA details when requested by management, customers, or suppliers. It is meant to be a summary for the DFMEA team and others to confirm completion of each of the tasks and to review the results of the analysis. It is important that the content of the documentation fulfills the requirements of the organization, the intended reader, and relevant stakeholders. Details may be agreed upon between the parties. In this way, it is also ensured that all details of the analysis and the intellectual property remain at the developing company. The layout of the document may be company specific. However, the report should indicate the technical risk of failure as part of the development plan and project milestones. The content may include the following (a small illustrative record of these items follows the list):
- A statement of final status compared to original goals established in the Project Plan
- FMEA InTent – Purpose of this FMEA?
- FMEA Timing – FMEA due date?
- FMEA Team – List of participants?
- FMEA Task – Scope of this FMEA?
- FMEA Tool – How do we conduct the analysis? Method used?
- A summary of the scope of the analysis, identifying what is new.
- A summary of how the functions were developed.
- A summary of at least the high-risk failures as determined by the team, with a copy of the specific S/O/D rating tables and the method of action prioritization (e.g., Action Priority table).
- A summary of the actions taken and/or planned to address the high-risk failures including status of those actions.
- A plan and commitment of timing for ongoing FMEA improvement actions.
- Commitment and timing to close open actions.
- Commitment to review and revise the DFMEA during mass production to ensure the accuracy and completeness of the analysis as compared with the production design (e.g., revisions triggered by design changes, corrective actions, etc., based on company procedures).
- Commitment to capture “things gone wrong” in foundation DFMEAs for the benefit of future analysis reuse, when applicable.
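The list above can be captured as a simple record when a company chooses to manage report content in software. The sketch below is a hypothetical illustration: the field names are assumptions mapping to the 5T header items and summaries above, not a format prescribed by the handbook.

```python
from dataclasses import dataclass, field

@dataclass
class DfmeaReport:
    intent: str                  # purpose of this FMEA
    timing: str                  # FMEA due date
    team: list[str]              # list of participants
    task: str                    # scope of this FMEA
    tool: str                    # how the analysis was conducted / method used
    final_status: str = ""       # final status vs. original Project Plan goals
    scope_summary: str = ""      # what was analyzed and what is new
    functions_summary: str = ""  # how the functions were developed
    high_risk_summary: str = ""  # high-risk failures, rating tables, prioritization method
    actions_summary: str = ""    # actions taken/planned and their status
    commitments: list[str] = field(default_factory=list)  # open actions, ongoing improvement, production review
```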