AIAG & VDA FMEA For Monitoring And System Response (FMEA-MSR)

In a Supplemental FMEA for Monitoring and System Response, potential Failure Causes which might occur under customer operating conditions are analyzed with respect to their technical effects on the system, vehicle, people, and regulatory compliance. The method considers whether or not Failure Causes or Failure Modes are detected by the system, or Failure Effects are detected by the driver. Customer operation is to be understood as end-user operation or in-service operation and maintenance operations.
FMEA-MSR includes the following elements of risk:

  1. Severity of harm, regulatory noncompliance, loss or degradation of functionality, and unacceptable quality; represented by (S)
  2. Estimated frequency of a Failure Cause in the context of an operational situation; represented by (F)
  3. Technical possibilities to avoid or limit the Failure Effect via diagnostic detection and automated response, combined with human possibilities to avoid or limit the Failure Effect via sensory perception and physical reaction; represented by (M)

The combination of F and M is an estimate of the probability of occurrence of the Failure Effect due to the Fault (Failure Cause), and resulting malfunctioning behavior (Failure Mode).
NOTE: The overall probability of a Failure Effect occurring may be higher, because different Failure Causes may lead to the same Failure Effect.
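The idea behind combining F and M can be sketched in code. The following minimal Python fragment is illustrative only and is not part of the handbook: FMEA-MSR combines F and M as ordinal ratings, not with a formula, and the function name and numbers here are hypothetical assumptions.

```python
# Illustrative only: F and M are ordinal ratings in FMEA-MSR; this sketch
# shows the underlying idea that a Failure Effect occurs when the cause
# occurs AND monitoring/response fails to intercept it in time.

def effect_probability(cause_rate: float, diagnostic_coverage: float) -> float:
    """Rough estimate of the residual rate of the original Failure Effect."""
    return cause_rate * (1.0 - diagnostic_coverage)

# Hypothetical example: a cause at 1e-5 per operating hour, monitored with
# 99% effective detection and response, leaves about 1e-7 per hour for the
# original (unmitigated) effect.
print(effect_probability(1e-5, 0.99))  # ~1e-07
```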
FMEA-MSR adds value by assessing risk reduction as a result of monitoring and response. FMEA-MSR evaluates the current state of risk of failure and derives the necessity for additional monitoring by comparison with the conditions for acceptable residual risk.
The analysis can be part of a Design FMEA in which the aspects of Development are supplemented by aspects of Customer Operation. However, it is usually only applied when diagnostic detection is necessary to maintain safety or compliance.

Detection in DFMEA is not the same as Monitoring in Supplemental FMEA-MSR. In DFMEA, Detection Controls document the ability of testing to demonstrate the fulfillment of requirements in development and validation. For monitoring that is already part of the system design, validation is intended to demonstrate that diagnostic monitoring and system response work as intended. Conversely, Monitoring in FMEA-MSR assesses the effectiveness of fault detection performance in customer operation, assuming that specifications are fulfilled. The Monitoring rating also comprehends the safe performance and reliability of system reactions to monitored faults. It contributes to the assessment of the fulfillment of Safety Goals and may be used for deriving the Safety Concept.

Supplemental FMEA-MSR addresses risks that in DFMEA would otherwise be assessed as high, by considering additional factors that more accurately reflect the lower assessed risk resulting from the diagnostic functions of the vehicle operating system. These additional factors contribute to an improved depiction of the risk of failure (including risk of harm, risk of noncompliance, and risk of not fulfilling specifications). FMEA-MSR contributes to the provision of evidence of the ability of the diagnostic, logical, and actuation mechanisms to achieve and maintain a safe or compliant state (in particular, appropriate failure mitigation ability within the maximum fault handling time interval and within the fault tolerant time interval). FMEA-MSR evaluates the current state of risk of failure under end-user conditions (not just risk of harm to persons).

The detection of faults/failures during customer operation can be used to avoid the original Failure Effect by switching to a degraded operational state (including disabling the vehicle), informing the driver, and/or writing a Diagnostic Trouble Code (DTC) into the control unit for service purposes. In terms of FMEA, the result of RELIABLE diagnostic detection and response is to eliminate (prevent) the original effect and replace it with a new, less severe effect.

FMEA-MSR is useful in deciding whether the system design fulfills the performance requirements with respect to safety and compliance. The results may include items such as:

  • additional sensor(s) may be needed for monitoring purposes
  • redundancy in processing may be needed
  • plausibility checks may reveal sensor malfunctions

Step 1: Planning and Preparation

1.1 Purpose

The main objectives of Planning and Preparation in FMEA-MSR are:

  • Project identification
  • Project plan: Intent, Timing, Team, Tasks, Tools (5T)
  • Analysis boundaries: What is included and excluded from the analysis
  • Identification of baseline FMEA
  • Basis for the Structure Analysis step

1.2 FMEA-MSR Project Identification and Boundaries

FMEA-MSR project identification includes a clear understanding of what needs to be evaluated. This involves a decision-making process to define the FMEA-MSRs that are needed for a customer program. What to exclude can be just as important as what to include in the analysis. The following may assist the team in defining FMEA-MSR projects, as applicable:

  • Hazard Analysis and Risk Assessment
  • Legal Requirements
  • Technical Requirements
  • Customer wants/needs/expectation (external and internal customers)
  • Requirements specification
  • Diagrams (Block/Boundary/System)
  • Schematics, Drawings, and/or 3D Models
  • Bill of Materials (BOM), Risk Assessment
  • Previous FMEA for similar products

Answers to these questions and others defined by the company help create the list of FMEA-MSR projects needed. The FMEA-MSR project list assures consistent direction, commitment, and focus. Below are some basic questions that help identify FMEA-MSR boundaries:

  1. After completing a DFMEA on an Electrical/Electronic/Programmable Electronic System, are there effects that may be harmful to persons or involve regulatory noncompliance?
  2. Did the DFMEA indicate that all of the causes which lead to harm or noncompliance can be detected by direct sensing and/or plausibility algorithms?
  3. Did the DFMEA indicate that the intended system response to any and all of the detected causes is to switch to a degraded operational state (including disabling the vehicle), inform the driver, and/or write a Diagnostic Trouble Code (DTC) into the control unit for service purposes?

FMEA for Monitoring and System Response may be used to examine systems which have integrated fault monitoring and response mechanisms during operation. Typically, these are more complex systems composed of sensors, actuators, and logical processing units. The diagnosis and monitoring in such systems may be achieved through hardware and/or software. Systems that may be considered in a Supplemental FMEA for Monitoring and System Response consist in general of at least a sensor, a control unit, and an actuator, or a subset of them, and are called mechatronic systems. Systems in scope may also consist of mechanical hardware components (e.g., pneumatics and hydraulics).

Generic block diagram of an Electrical / Electronic / Programmable Electronic system

The scope of a Supplemental FMEA for Monitoring and System Response may be established in consultation between customer and supplier. Applicable scoping criteria may include, but are not limited to:

  • System Safety relevance
  • ISO Standards, i.e., Safety Goals according to ISO 26262
  • Documentation requirements from legislative bodies, e.g., UN/ECE Regulations, FMVSS/CMVSS, NHTSA, and On-Board Diagnostics (OBD) compliance

1.3 FMEA-MSR Project Plan

A plan for the execution of the FMEA-MSR should be developed once the FMEA-MSR project is known. It is recommended that the 5T method (Intent, Timing, Team, Tasks, Tool) be used. The plan for the FMEA-MSR helps the company be proactive in starting the FMEA-MSR early. The FMEA-MSR activities (5-step process) should be incorporated into the overall design project plan.

Step 2: Structure Analysis

2.1 Purpose

The main objectives of Structure Analysis in FMEA-MSR are:

  • Visualization of the analysis scope
  • Structure tree or equivalent: block diagram, boundary diagram, digital model, physical parts
  • Identification of design interfaces, interactions
  • Collaboration between customer and supplier engineering teams (interface responsibilities)
  • Basis for the Function Analysis step

Depending on the scope of analysis, the structure may consist of hardware elements and software elements. Complex structures may be split into several structures (work packages) or different layers of block diagrams and analyzed separately for organizational reasons or to ensure sufficient clarity. The scope of the FMEA-MSR is limited to the elements of the system for which the baseline DFMEA showed that there are causes of failure which can result in hazardous or non-compliant effects. The scope may be expanded to include signals received by the control unit. In order to visualize a system structure, two methods are commonly used:

  • Block (Boundary) Diagrams
  • Structure Trees

2.2 Structure Trees

In a Supplemental FMEA for Monitoring and System Response, the root element of a structure tree can be at the vehicle level (i.e., for OEMs which analyze the overall system) or at the system level (i.e., for suppliers which analyze a subsystem or component).

Example of a structure tree of a window lift system for investigating erroneous signals, monitoring, and system response

The sensor element and the control unit may also be part of one component (smart sensor). Diagnostics and monitoring in such systems may be realized by hardware and/or software elements.

Example of a structure tree of a smart sensor with an internal sensing element and output to an interface

In case there is no sensor within the scope of analysis, an Interface Element is used to describe the data/current/voltage received by the ECU. One function of any ECU is to receive signals via a connector. These signals can be missing or erroneous; without monitoring, the result is erroneous output. In case there is no actuator within the scope of analysis, an Interface Element is used to describe the data/current/voltage sent by the ECU. Another function of any ECU is to send signals, e.g., via a connector. These signals can also be missing or erroneous, and the output can also be “no output” or “failure information.” The causes of erroneous signals may lie within a component which is outside the scope of responsibility of the engineer or organization, yet these erroneous signals may affect the performance of a component which is within that scope of responsibility. It is therefore necessary to include such causes in the FMEA-MSR analysis.
NOTE: Ensure that the structure is consistent with the Safety Concept (as applicable).

STRUCTURE ANALYSIS (STEP 2)

| 1. Next Higher Level | 2. Focus Element | 3. Next Lower Level or Characteristic Type |
|---|---|---|
| Window Lift System | ECU Window Lifter | Connector ECU Window Lifter |

Example of Structure Analysis in the FMEA-MSR Form Sheet
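To make the hierarchy concrete, here is a minimal data-model sketch in Python. It is illustrative only: the class and field names are assumptions, not part of the handbook, and the interface flag reflects the Interface Element convention described above.

```python
# Minimal sketch of an FMEA-MSR structure tree. Names are hypothetical;
# the interface flag marks data/current/voltage interfaces used when no
# sensor or actuator is within the scope of analysis.
from dataclasses import dataclass, field

@dataclass
class StructureElement:
    name: str
    is_interface: bool = False
    children: list["StructureElement"] = field(default_factory=list)

window_lift = StructureElement("Window Lift System", children=[
    StructureElement("ECU Window Lifter", children=[
        StructureElement("Connector ECU Window Lifter"),
        StructureElement("Hall effect sensor signal", is_interface=True),
    ]),
])
```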

Step 3: Function Analysis

The main objectives of Function Analysis in FMEA-MSR are:

  • Visualization of functions and relationships between functions in a function tree/function net, or an equivalent parameter diagram (P-diagram)
  • Cascade of customer (external and internal) functions with associated requirements
  • Association of requirements or characteristics to functions
  • Collaboration between engineering teams (systems, safety, and components)
  • Basis for the Failure Analysis step

In a Supplemental FMEA for Monitoring and System Response, monitoring for failure detection and failure responses are considered as functions. Hardware and software functions may include monitoring of system states. Functions for monitoring and detection of faults/failures may consist of, for example: out-of-range detection, cyclic redundancy checks, plausibility checks, and sequence counter checks. Functions for failure reactions may consist of, for example: provision of default values, switching to a limp-home mode, switching off the corresponding function, and/or display of a warning. Such functions are modeled for the structural elements that carry them, i.e., control units or components with computational abilities like smart sensors. Additionally, sensor signals received by control units can be considered; therefore, functions of signals may be described as well. Finally, functions of actuators can be added, which describe the way the actuator or vehicle reacts on demand. Performance requirements are assumed to be the maintenance of a safe or compliant state, and fulfillment of requirements is assessed through the risk assessment. In case sensors and/or actuators are not within the scope of analysis, functions are assigned to the corresponding interface elements (consistent with the Safety Concept, as applicable).
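As an illustration of the kinds of monitoring functions listed above, the following Python sketch implements an out-of-range check and a plausibility check with a simple fault reaction (provision of a default value). All names, thresholds, and the default value are hypothetical assumptions, not values from the handbook.

```python
# Hedged sketch of monitoring functions: out-of-range detection and a
# plausibility check against a redundant reading, with provision of a
# default value as the fault reaction. Thresholds are hypothetical.

SIGNAL_MIN, SIGNAL_MAX = 0.5, 4.5   # assumed valid sensor voltage range
DEFAULT_VALUE = 2.5                 # assumed safe default on detected fault

def out_of_range(value: float) -> bool:
    return not (SIGNAL_MIN <= value <= SIGNAL_MAX)

def implausible(value: float, redundant_value: float, tol: float = 0.2) -> bool:
    """Fault if two redundant readings disagree beyond a tolerance."""
    return abs(value - redundant_value) > tol

def monitored_signal(value: float, redundant_value: float) -> float:
    if out_of_range(value) or implausible(value, redundant_value):
        # System response: substitute a default value; in a real system a
        # DTC would also be written for service purposes.
        return DEFAULT_VALUE
    return value
```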

Example of a Structure Tree with functions
FUNCTION ANALYSIS (STEP 3)

| 1. Next Higher Level Function and Requirement | 2. Focus Element Function and Requirement | 3. Next Lower Level Function and Requirement or Characteristic Type |
|---|---|---|
| Provide anti-pinch protection for comfort closing mode | Provide signal to stop and reverse window lifter motor in case of pinch situation | Transmit signal from Hall effect sensor to ECU |

Example of Function Analysis in the FMEA-MSR Form Sheet

Step 4: Failure Analysis

4.1 Purpose

The purpose of Failure Analysis in FMEA-MSR is to describe the chain of events which lead up to the end effect, in the context of a relevant scenario. The main objectives of Failure Analysis in FMEA-MSR are:

  • Establishment of the failure chain
  • Potential Failure Cause, Monitoring, System Response, Reduced Failure Effect
  • Identification of product Failure Causes using a parameter diagram or failure network
  • Collaboration between customer and supplier (Failure Effects)
  • Basis for the documentation of failures in the FMEA form sheet and the Risk Analysis step

4.2 Failure Scenario

A Failure Scenario comprises a description of relevant operating conditions in which a fault results in malfunctioning behavior, and of the possible sequences of events (system states) that lead to an end system state (Failure Effect). It starts from defined Failure Causes and leads to the Failure Effects.

Theoretical failure chain model DFMEA and FMEA-MSR

The focus of the analysis is a component with diagnostic capabilities, e.g., an ECU. If the component is not capable of detecting the fault/failure, the Failure Mode will occur which leads to the end effect with a corresponding degree of Severity. However, if the component can detect the failure, this leads to a system response with a Failure Effect with a lower Severity compared to the original Failure Effect. Details are described in the following scenarios (1) to (3).

Failure Scenario (1) – Non-Hazardous

Failure Scenario (1) describes the malfunctioning behavior from the occurrence of the fault to the Failure Effect, which in this example is not hazardous but may reach a non-compliant end system state.

Failure Scenario (2) – Hazardous

Failure Scenario (2) describes the malfunctioning behavior from the occurrence of the fault to the Failure Effect, which in this example leads to a hazardous event. As an aspect of the Failure Scenario, it is necessary to estimate the magnitude of the Fault Handling Time Interval (the time between the occurrence of the fault and the occurrence of the hazardous/non-compliant Failure Effect). The Fault Handling Time Interval is the maximum time span of malfunctioning behavior before a hazardous event occurs, if the safety mechanisms are not activated.
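The timing condition can be sketched as a simple check: detection plus reaction must complete within the Fault Handling Time Interval. This Python fragment is illustrative; the function name and the millisecond figures are assumptions.

```python
# Sketch of the Fault Handling Time Interval (FHTI) condition: the safety
# mechanism avoids the hazardous effect only if detection and reaction
# both complete within the FHTI. Times are hypothetical, in milliseconds.

def response_in_time(detection_ms: float, reaction_ms: float, fhti_ms: float) -> bool:
    return detection_ms + reaction_ms <= fhti_ms

# Assumed example: 50 ms to detect the fault, 30 ms to switch off the
# function, FHTI of 100 ms -> the mitigated effect replaces the hazard.
print(response_in_time(50, 30, 100))  # True
```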

Failure Scenario (3) – Mitigated Effect

Failure Scenario (3) describes the malfunctioning behavior from the occurrence of the fault to the mitigated Failure Effect, which in this example leads to a loss or degradation of a function instead of the hazardous event.

4.3 Failure Cause

The description of the Failure Cause is the starting point of the Failure Analysis in a Supplemental FMEA for Monitoring and System Response. The Failure Cause is assumed to have occurred and is not the true Failure Cause (root cause). Typical Failure Causes are electrical/electronic faults (E/E faults). Root causes may be insufficient robustness when exposed to various factors such as the external environment, vehicle dynamics, wear, service, stress cycling, data bus overloading, erroneous signal states, etc. Failure Causes can be derived from the DFMEA, catalogues for failures of E/E components, and network communication data descriptions.

NOTE: In FMEA-MSR, diagnostic monitoring is assumed to function as intended (however, it may not be effective). Therefore, Failure Causes of diagnostics are not part of FMEA-MSR but can be added to the DFMEA section of the form sheet. These include: failure to detect the fault; falsely detected fault (nuisance); unreliable fault response (variation in response capability).

Teams may decide not to include failures of diagnostic monitoring in DFMEA because Occurrence ratings are most often very low (including “latent faults,” ref. ISO 26262); therefore, this analysis may be of limited value. However, the correct implementation of diagnostic monitoring should be part of the test protocol. Prevention Controls of diagnostics in a DFMEA describe how reliably a mechanism is estimated to detect the Failure Cause and react in time with respect to the performance requirements. Detection Controls of diagnostics in a DFMEA would relate back to development tests which verify the correct implementation and the effectiveness of the monitoring mechanism.

4.4 Failure Mode

A Failure Mode is the consequence of the fault (Failure Cause). In FMEA-MSR two possibilities are considered:

  1. In failure scenarios (1) and (2), the fault is not detected or the system reaction is too late. Therefore, the Failure Mode in FMEA-MSR is the same as in DFMEA.
  2. Failure scenario (3) is different: the fault is detected and the system response leads to a mitigated Failure Effect. In this case a description of the diagnostic monitoring and system response is added to the analysis. Because the failure chain in this possibility consists of a fault/failure and a description of an intended behavior, it is called a hybrid failure chain or hybrid failure network (see the sketch below).
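The following Python sketch models the two possibilities as data. It is illustrative only; the class and field names are assumptions, and the window-lifter strings echo the form-sheet example used throughout this section.

```python
# Sketch of the two failure-chain possibilities: an undetected fault keeps
# the DFMEA failure mode; a detected fault forms a hybrid chain of
# fault/failure plus intended monitoring and system response behavior.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureChain:
    failure_cause: str
    detected: bool
    failure_mode: str                          # malfunctioning behavior
    monitoring_response: Optional[str] = None  # intended behavior, scenario (3)

undetected = FailureChain(
    "Poor connection of Hall effect sensor", False,
    "No signal to stop and reverse window lifter motor")

hybrid = FailureChain(
    "Poor connection of Hall effect sensor", True,
    "Loss of sensor signal detected by ECU",
    monitoring_response="Disable comfort closing mode and write DTC")
```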

4.5 Failure Effect

A Failure Effect is defined as the consequence of a Failure Mode. Failure Effects in FMEA-MSR are either a malfunctioning behavior of the system or an intended behavior after detection of a Failure Cause. The end effect may be a “hazard” or “non-compliant state” or, in case of detection and timely system response, a “safe state” or “compliant state” with loss or degradation of a function. The Severity of Failure Effects is evaluated on a ten-point scale.

FAILURE ANALYSIS (STEP 4)

| 1. Failure Effect (FE) to the Next Higher Level Element and/or End User | 2. Failure Mode (FM) of the Focus Element | 3. Failure Cause (FC) of the Next Lower Level Element or Characteristic |
|---|---|---|
| No anti-pinch protection in comfort closing mode (hand or neck may be pinched between window glass and frame) | No signal to stop and reverse window lifter motor in case of pinch situation | Signal of Hall effect sensor is not transmitted to ECU due to poor connection of Hall effect sensor |

Example of Failure Analysis in the FMEA-MSR Form Sheet

Step 5: Risk Analysis

5.1 Purpose

The purpose of Risk Analysis in FMEA-MSR is to estimate the risk of failure by evaluating Severity, Frequency, and Monitoring, and to prioritize the need for actions to reduce risk. The main objectives of the FMEA-MSR Risk Analysis are:

  • Assignment of existing and/or planned controls and rating of failures
  • Assignment of Prevention Controls to the Failure Causes
  • Assignment of Detection Controls to the Failure Causes and/or Failure Modes
  • Rating of Severity, Frequency and Monitoring for each failure chain.
  • Evaluation of Action Priority
  • Collaboration between customer and supplier (Severity).
  • Basis for the Optimization step.

5.2 Evaluations

Each Failure Mode, Cause and Effect relationship (failure chain or hybrid network) is assessed by the following three criteria:

  • Severity (S): represents the Severity of the Failure Effect
  • Frequency (F): represents the Frequency of Occurrence of the Cause in a given operational situation, during the intended service life of the vehicle
  • Monitoring (M): represents the Detection potential of the Diagnostic Monitoring functions (detection of Failure Cause, Failure Mode and/or Failure Effect)

Evaluation numbers from 1 to 10 are used for S, F, and M respectively, where 10 stands for the highest risk contribution. By examining these ratings individually and in combinations of the three factors, the need for risk-reducing actions may be prioritized.

5.3 Severity (S)

The Severity rating (S) is a measure associated with the most serious Failure Effect for a given Failure Mode of the function being evaluated, and is identical for DFMEA and FMEA-MSR. Severity should be estimated using the criteria in the Severity table below. The table may be augmented to include product-specific examples. The FMEA project team should agree on evaluation criteria and a rating system which are consistent, even if modified for individual design analysis. The Severity evaluations of the Failure Effects should be transferred by the customer to the supplier, as needed.

Product General Evaluation Criteria Severity (S)
Potential Failure Effects rated according to the criteria below. (The “Corporate or Product Line Examples” column is blank until filled in by the user.)

| S | Effect | Severity criteria | Corporate or Product Line Examples |
|---|---|---|---|
| 10 | Very High | Affects safe operation of the vehicle and/or other vehicles, the health of driver or passengers, or road users or pedestrians. | |
| 9 | Very High | Noncompliance with regulations. | |
| 8 | High | Loss of primary vehicle function necessary for normal driving during expected service life. | |
| 7 | High | Degradation of primary vehicle function necessary for normal driving during expected service life. | |
| 6 | Moderate | Loss of secondary vehicle function. | |
| 5 | Moderate | Degradation of secondary vehicle function. | |
| 4 | Moderate | Very objectionable appearance, sound, vibration, harshness, or haptics. | |
| 3 | Low | Moderately objectionable appearance, sound, vibration, harshness, or haptics. | |
| 2 | Low | Slightly objectionable appearance, sound, vibration, harshness, or haptics. | |
| 1 | Very Low | No discernible effect. | |

Supplemental FMEA-MSR SEVERITY (S)

5.4 Rationale for Frequency Rating

In a Supplemental FMEA for Monitoring and System Response, the likelihood of a failure occurring in the field under customer operating conditions during service life is relevant. Analysis of end-user operation requires the assumption that the manufacturing process is adequately controlled, in order to assess the sufficiency of the design. Examples on which a rationale may be based:

  • Evaluation based on the results of Design FMEAs
  • Evaluation based on the results of Process FMEAs
  • Field data of returns and rejected parts
  • Customer complaints
  • Warranty databases
  • Data handbooks

The rationale is documented in the column “Rationale for Frequency Rating” of the FMEA-MSR form sheet.

5.5 Frequency (F)

The Frequency rating (F) is a measure of the likelihood of occurrence of the cause in relevant operating situations during the intended service life of the vehicle or the system, using the criteria in the table below. If the Failure Cause does not always lead to the associated Failure Effect, the rating may be adapted, taking into account the probability of exposure to the relevant operating condition. In such cases the operational situation and the rationale are to be stated in the column “Rationale for Frequency Rating.”
Example: From field data it is known how often a control unit is defective in ppm/year. This may lead to F=3. The system under investigation is a parking system, which is used for only a very limited time in comparison to the overall operating time, so harm to persons is only possible when the defect occurs during the parking maneuver. Therefore, Frequency may be lowered to F=2.

Frequency Potential (F) for the Product
Frequency criteria (F) for the estimated occurrence of the Failure Cause in relevant operating situations during the intended service life of the vehicle. (The “Corporate or Product Line Examples” column is blank until filled in by the user.)

| F | Estimated Frequency | Frequency criteria – FMEA-MSR | Corporate or Product Line Examples |
|---|---|---|---|
| 10 | Extremely High or cannot be determined | Frequency of occurrence of the Failure Cause is unknown or known to be unacceptably high during the intended service life of the vehicle. | |
| 9 | High | Failure Cause is likely to occur during the intended service life of the vehicle. | |
| 8 | High | Failure Cause may occur often in the field during the intended service life of the vehicle. | |
| 7 | Medium | Failure Cause may occur frequently in the field during the intended service life of the vehicle. | |
| 6 | Medium | Failure Cause may occur somewhat frequently in the field during the intended service life of the vehicle. | |
| 5 | Medium | Failure Cause may occur occasionally in the field during the intended service life of the vehicle. | |
| 4 | Low | Failure Cause is predicted to occur rarely in the field during the intended service life of the vehicle. At least ten occurrences in the field are predicted. | |
| 3 | Very Low | Failure Cause is predicted to occur in isolated cases in the field during the intended service life of the vehicle. At least one occurrence in the field is predicted. | |
| 2 | Extremely Low | Failure Cause is predicted not to occur in the field during the intended service life of the vehicle, based on prevention and detection controls and field experience with similar parts. Isolated cases cannot be ruled out; there is no proof it will not happen. | |
| 1 | Cannot occur | Failure Cause cannot occur during the intended service life of the vehicle or is virtually eliminated. There is evidence that the Failure Cause cannot occur, and the rationale is documented. | |

| Percentage of relevant operating condition in comparison to overall operating time | Value by which F may be lowered |
|---|---|
| < 10% | 1 |
| < 1% | 2 |

Supplemental FMEA-MSR FREQUENCY (F)

Note:

  1. Probability increases as the number of vehicles increases.
  2. The reference value for estimation is one million vehicles in the field.
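The exposure adjustment above can be sketched in code. This Python fragment is illustrative only; it encodes the two thresholds from the table and reproduces the parking-system example, with the 5% exposure figure being a hypothetical assumption.

```python
# Sketch of the exposure adjustment: F may be lowered by 1 if the relevant
# operating condition covers <10% of overall operating time, and by 2 if
# it covers <1%. Clamped at F=2, since F=1 requires documented evidence
# that the Failure Cause cannot occur.

def adjusted_frequency(base_f: int, exposure_fraction: float) -> int:
    if exposure_fraction < 0.01:
        reduction = 2
    elif exposure_fraction < 0.10:
        reduction = 1
    else:
        reduction = 0
    return max(2, base_f - reduction)

# Parking maneuvers assumed (hypothetically) to be ~5% of operating time:
print(adjusted_frequency(3, 0.05))  # 2, matching the worked example above
```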

5.6 Current Monitoring Controls

All controls that are planned or already implemented and lead to a detection of the Failure Cause, the Failure Mode, or the Failure Effect by the system or by the driver are entered into the “Current Monitoring Controls” column. In addition, the fault reaction after detection should be described, e.g., provision of default values (if not already sufficiently described by the Failure Effect). Monitoring evaluates the potential that the Failure Cause, the Failure Mode, or the Failure Effect can be detected early enough that the initial Failure Effect can be mitigated before a hazard occurs or a non-compliant state is reached. The result is an end-state effect with a lower Severity.

5.7 Monitoring (M)

The Monitoring rating (M) is a measure of the ability to detect a fault/failure during customer operation and to apply the fault reaction in order to maintain a safe or compliant state. The Monitoring rating relates to the combined ability of all sensors, logic, and human sensory perception to detect the fault/failure, and to react by modifying the vehicle behavior by means of mechanical actuation and physical reaction (controllability). In order to maintain a safe or compliant state of operation, the sequence of fault detection and reaction needs to take place before the hazardous or non-compliant effect occurs. The resulting rating describes the ability to maintain a safe or compliant state of operation. Monitoring is a relative rating within the scope of the individual FMEA and is determined without regard for Severity or Frequency. Monitoring should be estimated using the criteria in the table below. This table may be augmented with examples of common monitoring. The FMEA project team should agree on evaluation criteria and a rating system which are consistent, even if modified for individual product analysis. The assumption is that Monitoring is implemented and tested as designed. The effectiveness of Monitoring depends on the design of the sensor hardware, sensor redundancy, and the diagnostic algorithms that are implemented. Plausibility metrics alone are not considered to be effective. Implementation of monitoring and the verification of effectiveness should be part of the development process and therefore may be analyzed in the corresponding DFMEA of the product. The effectiveness of diagnostic monitoring and response, the fault monitoring response time, and the Fault Tolerant Time Interval need to be determined prior to rating. Determination of the effectiveness of diagnostic monitoring is addressed in detail in ISO 26262-5:2018 Annex D.
In practice, three different monitoring/response cases may be distinguished:

If there is no monitoring control, or if monitoring and response do not occur within the Fault Handling Time Interval, then Monitoring should be rated as Not Effective (M=10).

If monitoring and response are RELIABLE (M=1), the original Failure Effect is virtually eliminated. Only the mitigated Failure Effect remains relevant for the risk estimation of the product or system; in this instance only, the mitigated FE is relevant for the Action Priority rating, not the original FE. The assignment of Monitoring ratings to Failure Causes and their corresponding Monitoring Controls can vary depending on:

  • Variations in the Failure Cause or Failure Mode
  • Variations in the hardware implemented for diagnostic monitoring
  • The execution timing of the safety mechanism, e.g., the failure is detected during “power-up” only
  • Variations in system response
  • Variations in human perception and reaction
  • Knowledge of implementation and effectiveness from other projects (newness)

Depending on these variations or on execution timing, Monitoring Controls may not be considered RELIABLE in the sense of M=1.

If monitoring and response are effective but not fully reliable (M=2 to M=9), the original Failure Effect occurs less often: most of the faults are detected, and the system response leads to a mitigated Failure Effect. The reduced risk is represented by the Monitoring rating; the most serious Failure Effect remains S=10.

Supplemental FMEA for Monitoring and System Response (M)
Monitoring Criteria (M) for Failure Causes, Failure Modes, and Failure Effects by monitoring during customer operation. Use the rating number that corresponds to the least effective of the criteria for Monitoring or System Response. (The “Corporate or Product Line Examples” column is blank until filled in by the user.)

| M | Effectiveness of Monitoring Control and System Response | Diagnostic Monitoring / Sensory Perception Criteria | System Response / Human Reaction Criteria | Corporate or Product Line Examples |
|---|---|---|---|---|
| 10 | Not effective | The fault/failure cannot be detected at all, or not during the Fault Handling Time Interval, by the system, the driver, passenger, or service technician. | No response during the Fault Handling Time Interval. | |
| 9 | Very low | The fault/failure can almost never be detected in relevant operating conditions. Monitoring control with low effectiveness, high variance, or high uncertainty. Minimal diagnostic coverage. | The reaction to the fault/failure by the system or the driver may not reliably occur during the Fault Handling Time Interval. | |
| 8 | Low | The fault/failure can be detected in very few relevant operating conditions. Monitoring control with low effectiveness, high variance, or high uncertainty. Diagnostic coverage estimated <60%. | The reaction to the fault/failure by the system or the driver may not always occur during the Fault Handling Time Interval. | |
| 7 | Moderately low | Low probability of detecting the fault/failure during the Fault Handling Time Interval by the system or the driver. Monitoring control with low effectiveness, high variance, or high uncertainty. Diagnostic coverage estimated >60%. | Low probability of reacting to the detected fault/failure during the Fault Handling Time Interval by the system or the driver. | |
| 6 | Moderate | The fault/failure will be automatically detected by the system or the driver only during power-up, with medium variance in detection time. Diagnostic coverage estimated >90%. | The automated system or the driver will be able to react to the detected fault/failure in many operating conditions. | |
| 5 | Moderate | The fault/failure will be automatically detected by the system during the Fault Handling Time Interval, with medium variance in detection time, or detected by the driver in very many operating conditions. Diagnostic coverage estimated between 90% and 97%. | The automated system or the driver will be able to react to the detected fault/failure during the Fault Handling Time Interval in very many operating conditions. | |
| 4 | Moderately high | The fault/failure will be automatically detected by the system during the Fault Handling Time Interval, with medium variance in detection time, or detected by the driver in most operating conditions. Diagnostic coverage estimated >97%. | The automated system or the driver will be able to react to the detected fault/failure during the Fault Handling Time Interval in most operating conditions. | |
| 3 | High | The fault/failure will be automatically detected by the system during the Fault Handling Time Interval, with very low variance in detection time and with high probability. Diagnostic coverage estimated >99%. | The system will automatically react to the detected fault/failure during the Fault Handling Time Interval in most operating conditions, with very low variance in system response time and with high probability. | |
| 2 | Very high | The fault/failure will be automatically detected by the system during the Fault Handling Time Interval, with very low variance in detection time and with very high probability. Diagnostic coverage estimated >99.9%. | The system will automatically react to the detected fault/failure during the Fault Handling Time Interval in most operating conditions, with very low variance in system response time and with very high probability. | |
| 1 | Reliable and acceptable for elimination of original failure effect | The fault/failure will always be automatically detected by the system. Diagnostic coverage estimated to be significantly greater than 99.9%. | The system will always automatically react to the detected fault/failure during the Fault Handling Time Interval. | |

Supplemental FMEA-MSR MONITORING (M)
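As a rough illustration of the coverage bands in the table, the following Python sketch maps diagnostic coverage alone to a Monitoring rating. It is a deliberate simplification: the table also conditions M on detection timing, variance, and system/driver response, which this fragment ignores, and M=1 and M=6 additionally depend on non-coverage criteria.

```python
# Simplified sketch: maps diagnostic coverage to the closest M band from
# the table above. Real ratings also depend on detection timing, variance,
# and response behavior (e.g., M=6 is power-up-only detection, and M=1
# additionally requires that detection and response always occur).

def monitoring_rating_from_coverage(coverage: float) -> int:
    bands = [(0.999, 2), (0.99, 3), (0.97, 4), (0.90, 5), (0.60, 7)]
    for threshold, rating in bands:
        if coverage > threshold:
            return rating
    return 8  # coverage below 60%: low effectiveness

print(monitoring_rating_from_coverage(0.995))  # 3
```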

5.8 Action Priority (AP) for FMEA-MSR

The Action Priority is a methodology which allows for the prioritization of the need for action, considering Severity, Frequency, and Monitoring (SFM). This is done by the assignment of SFM ratings which provide a basis for the estimation of risk.

  • Priority High (H): Highest priority for review and action. The team needs to either identify an appropriate action to lower frequency and/or to improve monitoring controls or justify and document why current controls are adequate.
  • Priority Medium (M): Medium priority for review and action. The team should identify appropriate actions to lower frequency and/or to improve monitoring controls, or, at the discretion of the company, justify and document why controls are adequate.
  • Priority Low (L): Low priority for review and action. The team could identify actions to lower frequency and/or to improve monitoring controls.

It is recommended that potential Severity 9-10 Failure Effects with Action Priority High and Medium, at a minimum, be reviewed by management, including any recommended actions that were taken. This is not a prioritization of High, Medium, or Low risk; it is a prioritization of the need for actions to reduce risk.
Note: It may be helpful to include a statement such as “No further action is needed” in the Remarks field, as appropriate.

Action Priority is based on combinations of Severity, Frequency, and Monitoring ratings in order to prioritize actions for risk reduction.
| Effect | S | Prediction of Failure Cause occurring during service life of vehicle | F | Effectiveness of Monitoring | M | Action Priority (AP) |
|---|---|---|---|---|---|---|
| Product or Plant Effect Very High | 10 | Medium – Extremely High | 5-10 | Reliable – Not effective | 1-10 | H |
| | 10 | Low | 4 | Moderately High – Not effective | 4-10 | H |
| | 10 | Low | 4 | Very High – High | 2-3 | H |
| | 10 | Low | 4 | Reliable | 1 | M |
| | 10 | Very Low | 3 | Moderately High – Not effective | 4-10 | H |
| | 10 | Very Low | 3 | Very High – High | 2-3 | M |
| | 10 | Very Low | 3 | Reliable | 1 | L |
| | 10 | Extremely Low | 2 | Moderately High – Not effective | 4-10 | M |
| | 10 | Extremely Low | 2 | Reliable – High | 1-3 | L |
| | 10 | Cannot occur | 1 | Reliable – Not effective | 1-10 | L |
| Product Effect High | 9 | Low – Extremely High | 4-10 | Reliable – Not effective | 1-10 | H |
| | 9 | Extremely Low – Very Low | 2-3 | Very High – Not effective | 2-10 | H |
| | 9 | Extremely Low – Very Low | 2-3 | Reliable | 1 | L |
| | 9 | Cannot occur | 1 | Reliable – Not effective | 1-10 | L |
| Product Effect Moderately High | 7-8 | Medium – Extremely High | 6-10 | Reliable – Not effective | 1-10 | H |
| | 7-8 | Medium | 5 | Moderate – Not effective | 5-10 | H |
| | 7-8 | Medium | 5 | Reliable – Moderately High | 1-4 | M |
| | 7-8 | Low | 4 | Moderately Low – Not effective | 7-10 | H |
| | 7-8 | Low | 4 | Moderately High – Moderate | 4-6 | M |
| | 7-8 | Low | 4 | Reliable – High | 1-3 | L |
| | 7-8 | Very Low | 3 | Very Low – Not effective | 9-10 | H |
| | 7-8 | Very Low | 3 | Moderately Low – Low | 7-8 | M |
| | 7-8 | Very Low | 3 | Reliable – Moderate | 1-6 | L |
| | 7-8 | Extremely Low | 2 | Moderately Low – Not effective | 7-10 | M |
| | 7-8 | Extremely Low | 2 | Reliable – Moderate | 1-6 | L |
| | 7-8 | Cannot occur | 1 | Reliable – Not effective | 1-10 | L |
| Product Effect Moderately Low | 4-6 | High – Extremely High | 7-10 | Reliable – Not effective | 1-10 | H |
| | 4-6 | Medium | 5-6 | Moderate – Not effective | 6-10 | H |
| | 4-6 | Medium | 5-6 | Reliable – Moderate | 1-5 | M |
| | 4-6 | Extremely Low – Low | 2-4 | Very Low – Not effective | 9-10 | M |
| | 4-6 | Extremely Low – Low | 2-4 | Moderately Low – Low | 7-8 | M |
| | 4-6 | Extremely Low – Low | 2-4 | Reliable – Moderate | 1-6 | L |
| | 4-6 | Cannot occur | 1 | Reliable – Not effective | 1-10 | L |
| Product Effect Low | 2-3 | High – Extremely High | 7-10 | Reliable – Not effective | 1-10 | H |
| | 2-3 | Medium | 5-6 | Moderately Low – Not effective | 7-10 | M |
| | 2-3 | Medium | 5-6 | Reliable – Moderate | 1-6 | L |
| | 2-3 | Extremely Low – Low | 2-4 | Reliable – Moderate | 1-6 | L |
| | 2-3 | Cannot occur | 1 | Reliable – Not effective | 1-10 | L |
| Product Effect Very Low | 1 | Cannot occur – Extremely High | 1-10 | Reliable – Not effective | 1-10 | L |

ACTION PRIORITY FOR FMEA-MSR
  • NOTE 1: If M=1, the Severity rating of the Failure Effect after monitoring and system response is to be used for determining the MSR Action Priority. If M is not equal to 1, then the Severity rating of the original Failure Effect is to be used for determining the MSR Action Priority.
  • NOTE 2: When FMEA-MSR is used and M=1, the DFMEA Action Prioritization replaces the Severity rating of the original Failure Effect with the Severity rating of the mitigated Failure Effect.
Example of FMEA-MSR Risk Analysis – Evaluation of Current Risk Form Sheet
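To show the lookup pattern, here is a partial Python sketch of the Action Priority logic. Only the Severity 10 rows of the table are encoded; the function name is an assumption, and a full implementation would cover every S/F/M combination.

```python
# Partial sketch of the Action Priority (AP) lookup: encodes only the
# Severity 10 rows of the table above to illustrate the pattern.

def action_priority_s10(f: int, m: int) -> str:
    if f >= 5:                  # Medium - Extremely High frequency
        return "H"
    if f == 4:                  # Low frequency
        return "H" if m >= 2 else "M"
    if f == 3:                  # Very Low frequency
        if m >= 4:
            return "H"
        return "M" if m >= 2 else "L"
    if f == 2:                  # Extremely Low frequency
        return "M" if m >= 4 else "L"
    return "L"                  # F=1: cannot occur

print(action_priority_s10(f=4, m=1))  # M
print(action_priority_s10(f=3, m=5))  # H
```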

Step 6: Optimization

6.1 Purpose

The primary objective of Optimization in FMEA-MSR is to develop actions that reduce risk and improve safety. In this step, the team reviews the results of the risk analysis and evaluates action priorities. The main objectives of FMEA-MSR Optimization are:

  • Identification of the actions necessary to reduce risks
  • Assignment of responsibilities and target completion dates for action implementation
  • Implementation and documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken.
  • Collaboration between the FMEA team, management, customers, and suppliers regarding potential failures
  • Basis for refinement of the product requirements and prevention/detection controls

High and medium action priorities may indicate a need for technical improvement. Improvements may be achieved by introducing more reliable components which reduce the occurrence potential of the Failure Cause in the field, or by introducing additional monitoring which improves the detection capabilities of the system. Introduction of monitoring is similar to a design change; however, the Frequency of the Failure Cause is not changed. It may also be possible to eliminate the Failure Effect by introducing redundancy. If the team decides that no further actions are necessary, “No further action is needed” is written in the Remarks field to show the risk analysis was completed. The optimization is most effective in the following order:

  • Component design modifications in order to reduce the Frequency (F) of the Failure Cause (FC)
  • Increase the Monitoring (M) ability for the Failure Cause (FC) or Failure Mode (FM).

In the case of design modifications, all impacted design elements are evaluated again.
In the case of concept modifications, all steps of the FMEA are reviewed for the affected sections. This is necessary because the original analysis is no longer valid, since it was based upon a different design concept.

6.2 Assignment of Responsibilities

Each action should have a responsible individual and a Target Completion Date (TCD) associated with it. The responsible person ensures the action status is updated; if the action is confirmed, this person is also responsible for its implementation. The Actual Completion Date is documented, including the date the actions were implemented. Target Completion Dates should be realistic (i.e., in accordance with the product development plan, prior to process validation, prior to start of production).

6.3 Status of the Actions

Suggested levels for Status of Actions:

  • Open: No Action defined.
  • Decision pending (optional): The action has been defined but has not yet been decided on. A decision paper is being created.
  • Implementation pending (optional): The action has been decided on but not yet implemented.
  • Completed: Completed actions have been implemented and their effectiveness has been demonstrated and documented. A final evaluation has been done.
  • Not Implemented: Not Implemented status is assigned when a decision is made not to implement an action. This may occur when risks related to practical and technical limitations are beyond current capabilities.

The FMEA is not considered “complete” until the team assesses each item’s Action Priority and either accepts the level of risk or documents closure of all actions. Closure of all actions should be documented before the FMEA is released at Start of Production (SOP). If “No Action Taken”, then Action Priority is not reduced and the risk of failure is carried forward into the product design.

6.4 Assessment of Action Effectiveness

When an action has been completed, Frequency and Monitoring values are reassessed, and a new Action Priority may be determined. The new action receives a preliminary Action Priority rating as a prediction of effectiveness. However, the status of the action remains “implementation pending” until the effectiveness has been tested. After the tests are finalized, the preliminary rating has to be confirmed or adapted, when indicated. The status of the action is then changed from “implementation pending” to “completed.” The reassessment should be based on the effectiveness of the MSR Preventive and Diagnostic Monitoring Actions taken, and the new values are based on the definitions in the FMEA-MSR Frequency and Monitoring rating tables.

6.5 Continuous Improvement

FMEA-MSR serves as a historical record for the design. Therefore, the original Severity, Frequency, and Monitoring (S, F, M) numbers are not modified once actions have been taken. The completed analysis becomes a repository to capture the progression of design decisions and design refinements. However, original S, F, M ratings may be modified for basis, family, or generic DFMEAs, because the information is used as a starting point for an application-specific analysis.

Example of FMEA-MSR Optimization with new Risk Evaluation Form Sheet

Step 7: Results Documentation

7.1 Purpose

The purpose of the Results Documentation step is to summarize and communicate the results of the Failure Mode and Effects Analysis activity. The main objectives of FMEA-MSR Results Documentation are:

  • Communication of results and conclusions of the analysis.
  • Establishment of the content of the documentation
  • Documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
  • Communication of actions taken to reduce risks, including within the organization and with customers and/or suppliers as appropriate
  • Record of risk analysis and reduction to acceptable levels

7.2 FMEA Report

The scope and results of an FMEA should be summarized in a report. The report can be used for communication purposes within a company, or between companies. The report is not meant to replace reviews of the FMEA-MSR details when requested by management, customers, or suppliers. It is meant to be a summary for the FMEA-MSR team and others to confirm completion of each of the tasks and to review the results of the analysis. It is important that the content of the documentation fulfills the requirements of the organization, the intended reader, and relevant stakeholders. Details may be agreed upon between the parties. In this way, it is also ensured that all details of the analysis and the intellectual property remain at the developing company. The layout of the document may be company-specific. However, the report should indicate the technical risk of failure as a part of the development plan and project milestones. The content may include the following:

  1. A statement of final status compared to original goals established in Project Plan
    • FMEA Intent – Purpose of this FMEA?
    • FMEA Timing – FMEA due date?
    • FMEA Team – List of participants?
    • FMEA Task – Scope of this FMEA?
    • FMEA Tool – How was the analysis conducted, and which method was used?
  2. A summary of the scope of the analysis, identifying what is new.
  3. A summary of how the functions were developed.
  4. A summary of at least the high-risk failures as determined by the team and provide a copy of the specific S/F/M rating tables and method of action prioritization (e.g. Action Priority table).
  5. A summary of the actions taken and/or planned to address the high-risk failures, including the status of those actions.
  6. A plan and commitment of timing for ongoing FMEA improvement actions.
    • Commitment and timing to close open actions.
    • Commitment to review and revise the FMEA-MSR during mass production to ensure the accuracy and completeness of the analysis as compared with the original production design (e.g. revisions triggered from design changes, corrective actions, etc., based on company procedures).
    • Commitment to capture “things gone wrong” in foundation FMEA-MSRs for the benefit of future analysis reuse, when applicable.
Standard FMEA-MSR Form Sheet
FMEA-MSR Software View

AIAG & VDA Process Failure Mode and Effects Analysis (PFMEA)

Step 1: Planning and Preparation

1.1 Purpose

The purpose of the Process Planning and Preparation step is to describe what products/processes are to be included or excluded for review in the PFMEA project. The process takes into account that all processes within the facility can be analyzed or reanalyzed using PFMEA. This allows an organization to review all processes at a high level and to make a final determination for which processes will be analyzed. The overall advantage of Preparation is to focus resources on processes with the highest priority. The main objectives of the Process Planning and Preparation step are:

  • Project identification
  • Project plan: Intent, Timing, Team, Tasks, Tools (5T)
  • Analysis boundaries: What is included and excluded from the analysis
  • Identification of baseline FMEA with lessons learned
  • Basis for the Structure Analysis step

1.2 PFMEA Project Identification and Boundaries

PFMEA Project identification includes a clear understanding of what needs to be evaluated. This involves a decision-making process to define the PFMEAs that are needed for a customer program. What to exclude can be just as important as what to include in the analysis. Below are some basic questions that help identify PFMEA projects.

  • What is the customer buying from us?
  • Are there new requirements?
  • What specific process/elements cause a risk in imparting the requirement/characteristic?
  • Does the customer or company require a PFMEA?
  • Do we make the product and have design control?
  • Do we buy the product and still have design control?
  • Do we buy the product and do not have design control?
  • Who is responsible for the interface design?
  • Do we need a system, subsystem, component, or other level of analysis?

Answers to these questions and others defined by the company help create the list of PFMEA projects needed. The PFMEA project list assures consistent direction, commitment, and focus. The following may assist the team in defining PFMEA boundaries, as available:

  • Legal requirements
  • Technical requirements
  • Customer wants/needs/expectation (external and internal customers)
  • Requirements specification
  • Diagrams (Block/boundary/System)
  • Schematics, drawings, and/or 3D models
  • Bill of Materials (BOM), Risk Assessment
  • Previous FMEA for similar products
  • Error proofing requirements, Design for Manufacturability and Assembly (DFM/DFA)
  • QFD Quality Function Deployment

Preparation needs to be established at the start of the process to assure consistent direction and focus, e.g., an entire process line, process item / process element. Processes within the plant that can impact the product quality and can be considered for PFMEA analysis include: receiving processes, part and material storage, product and material delivery, manufacturing, assembly, packaging, labeling, completed product transportation, storage, maintenance processes, detection processes and rework and repair processes, etc.

Demonstration of the process for narrowing the Preparation

The following may be considered in defining the scope of the PFMEA as appropriate:

  • Novelty of technology /degree of innovation
  • Quality/Reliability history (in-house, zero mileage, field failures, warranty and policy claims for similar products)
  • Complexity of Design
  • Safety of people and systems
  • Cyber-Physical System (including cybersecurity)
  • Legal Compliance
  • Catalog and standard parts

Items that may assist in determining whether an existing PFMEA should be included in the final scope:

  • New development of products and processes.
  • Changes to products or processes
  • Changes to the operating conditions
  • Changed requirements (laws/regulations, standards/norms, customers, state of the art)
  • Manufacturing experience, 0 km issues, or field issues/Warranty
  • Process failures that may result in hazards
  • Findings due to internal product monitoring
  • Ergonomic issues
  • Continuous Improvement

1.3 PFMEA Project Plan

A plan for the execution of the PFMEA should be developed once the PFMEA project is known. It is recommended that the 5T method (Intent, Timing, Team, Tasks, Tool) be used. The organization also needs to factor development of the applicable Customer Specific Requirements (CSRs) methods and/or deliverables into the project plan. The plan for the PFMEA helps the company be proactive in starting the PFMEA early. The PFMEA activities (5-step process) should be incorporated into the overall project plan.

1.4 Identification of the Baseline PFMEA

Part of the preparation for conducting the PFMEA is knowing what information is already available that can help the cross-functional team. This includes use of a foundation PFMEA, a similar product PFMEA, or a product foundation PFMEA. The foundation PFMEA is a specialized foundation process FMEA for products that generally contain common or consistent product boundaries and related functions. For a new product, the project-specific components and functions are added to the foundation PFMEA to complete the new product's PFMEA. The additions for the new product may be made in the foundation PFMEA itself, or in a new document with reference to the original family or foundation PFMEA. If no baseline is available, the team will develop a new PFMEA.

1.5 Process FMEA Header

During Preparation, the header of the PFMEA document should be filled out. The header may be modified to meet the needs of the organization and includes some of the basic PFMEA Preparation information as follows:

  • Company Name: Name of the company responsible for the PFMEA
  • Manufacturing Location: Geographical Location
  • Customer Name: Name of Customer(s) or Product Family
  • Model Year / Program(s): Customer application or company model/style
  • Subject: Name of PFMEA project
  • PFMEA Start Date: Start Date
  • PFMEA Revision Date: Latest Revision Date
  • Cross-Functional Team: Team roster needed
  • PFMEA ID Number: Determined by Company
  • Process Responsibility: Name of PFMEA owner
  • Confidentiality Level: Business Use, Proprietary, Confidential
Example of Completed PFMEA Header Preparation (Step 1)

Step 2: Structure Analysis

2.1 Purpose

The purpose of Process Structure Analysis is to identify and break down the manufacturing system into Process Items, Process Steps, and Process Work Elements. The main objectives of a Process Structure Analysis are:

  • Visualization of the analysis scope
  • Structure tree or equivalent: process flow diagram
  • Identification of process steps and sub-steps
  • Collaboration between customer and supplier engineering teams (interface responsibilities)
  • Basis for the Function Analysis step

A Process Flow Diagram or a Structure Tree helps define the process and provides the basis for Structure Analysis. Formats may vary by company, including the use of symbols, symbol type, and their meaning. A Process FMEA is intended to represent the process flow as it physically exists when “walking the process,” describing the flow of the product through the process. Function Analysis (Step 3) should not begin until Structure Analysis (Step 2) is complete.

2.2 Process Flow Diagram

A Process Flow Diagram is a tool that can be used as an input to the Structure Analysis.

Process Flow Diagram

2.3 Structure Tree

The structure tree arranges system elements hierarchically and illustrates the dependencies via the structural connections. This pictorial structure allows for an understanding of the relationships between Process Items, Process Steps, and Process Work Elements. Each of these is a building block that will later have functions and failures added.

Example of Structure Analysis Structure Tree (Electrical Meter Assembly line)

The Process Item of the PFMEA is the highest level of the structure tree or process flow diagram and PFMEA. This can also be considered the end result of all of the successfully completed Process Steps.

Process Item

The Process Step is the focus of the analysis. A Process Step is a manufacturing operation or station.

Process Steps

The Process Work Element is the lowest level of the process flow or structure tree. Each work element is the name of a main category of potential causes that could impact the process step. The number of categories may vary by company (e.g., 4M, 5M, 6M, etc.; this is commonly called the Ishikawa approach). A process step may have one or more categories, with each analyzed separately.
4M Categories: Machine, Man, Material (Indirect), EnvironMent (Milieu)

Additional Categories could be Method, Measurement

STRUCTURE ANALYSIS (STEP 2)

| 1. Process Item: System, Subsystem, Part Element or Name of Process | 2. Process Step: Station No. and Name of Focus Element | 3. Process Work Element: 4M Type |
|---|---|---|
| Electrical Motor Assy Line | [OP 30] Sintered Bearing Press-In Process | Operator |
| Electrical Motor Assy Line | [OP 30] Sintered Bearing Press-In Process | Press Machine |

Example of Structure Analysis Form Sheet
  1. Process Item: The highest level of integration within the scope of analysis.
  2. Process Step: The element in focus. This is the item that is the topic of consideration of the failure chain.
  3. Process Work Element: The element that is the next level down the structure from the focus element. (A minimal data-model sketch of these three levels follows.)
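The following Python sketch models the three structure levels from the form sheet above. It is illustrative only; the class names are assumptions, and the 4M category strings follow the categories listed earlier.

```python
# Minimal sketch of the three PFMEA structure levels, each work element
# tagged with a 4M category (Machine, Man, Material, Milieu).
from dataclasses import dataclass, field

@dataclass
class WorkElement:
    name: str
    category: str  # 4M category

@dataclass
class ProcessStep:
    station: str
    work_elements: list[WorkElement] = field(default_factory=list)

@dataclass
class ProcessItem:
    name: str
    steps: list[ProcessStep] = field(default_factory=list)

line = ProcessItem("Electrical Motor Assy Line", steps=[
    ProcessStep("[OP 30] Sintered Bearing Press-In Process", work_elements=[
        WorkElement("Operator", "Man"),
        WorkElement("Press Machine", "Machine"),
    ]),
])
```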

2.4 Collaboration between Customer and Supplier Engineering Teams (Interface Responsibilities)

The output of the Structure Analysis (visualization of the process flow) provides a tool for collaboration between customers and suppliers (including machine suppliers) during technical reviews of the process design and/or the PFMEA project.

2.5 Basis for Function Analysis

The information defined during Step 2, Structure Analysis, will be used to develop Step 3, Function Analysis. If process elements (operations) are missing from the Structure Analysis, they will also be missing from the Function Analysis.

Step 3: Function Analysis

3.1 Purpose

The purpose of the Process Function Analysis is to ensure that the intended functions/requirements of the product/process are appropriately allocated. The main objectives of a Process Function Analysis are:

  • Visualization of product or process functions
  • Function tree/net or equivalent process flow diagram
  • Association of requirements or characteristics to functions
  • Collaboration between engineering teams (systems, safety, and components)
  • Basis for the Failure Analysis step

3.2 Function

A function describes what the process item or process step is intended to do. There may be more than one function for each process item or process step. Prior to beginning the Function Analysis, the information to be gathered could include, but is not limited to: product and process functions, product/process requirements, manufacturing environment conditions, cycle time, occupational or operator safety requirements, environmental impact, etc. This information is important in defining the “positive” functions and requirements needed for the Function Analysis. The description of a function needs to be clear. The recommended phrase format is an action verb followed by a noun, describing the measurable process function (“DO THIS” “TO THIS”). A function should be in the PRESENT TENSE; it uses the verb's base form (e.g., deliver, contain, control, assemble, transfer).

Examples: Drill hole, apply glue, insert pin, weld bracket

The Function of the Process Item begins at a high level and references the Process Item in the Structure Analysis. As a high-level description, it can take into account functions such as: internal function, external function, customer-related function, and/or end user function.

Example: Assemble components
The Function of the Process Step describes the resulting product features produced at the station.

Example: Press in sintered bearing to pole housing

The Function of the Process Work Element reflects the contribution of the Process Work Element to the Process Step in creating the process/product features.

Note: The negative of these examples will be the Failure Effects.

Example: Get sintered bearing from chute manually

Example: Press force to press sintered bearing into pole housing

For the logical linking of a function and structure, the following questions are asked:

"How?" – How is the product/process requirement achieved? – working from left to right
(Process Item → Process Step → Process Work Element)

"Why?" (What does it do?) – Why is the product/process requirement implemented? – working from right to left
(Process Work Element → Process Step → Process Item)
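
The two linking questions can be made concrete as traversals over the structure tree. Below is a minimal, self-contained sketch using plain nested dicts; this representation is an assumption for illustration, with names taken from the running example.

    # Structure tree as nested dicts: Process Item -> Process Steps -> Work Elements
    tree = {
        "Electrical Motor Assy Line": {
            "[OP 30] Sintered Bearing Press-In Process": ["Operator", "Press Machine"],
        },
    }

    # "How?" -- read left to right: how is the higher-level function achieved?
    for item, steps in tree.items():
        for step, work_elements in steps.items():
            print(f"HOW is '{item}' achieved? By '{step}'")
            for we in work_elements:
                print(f"  HOW is '{step}' achieved? Enabled by '{we}'")

    # "Why?" -- read right to left: why is the lower-level function implemented?
    for item, steps in tree.items():
        for step, work_elements in steps.items():
            for we in work_elements:
                print(f"WHY '{we}'? To enable '{step}', which realizes '{item}'")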

3.3 Requirements (Characteristics)

A Characteristic is a distinguishing feature (or quantifiable attribute) of a product; for example, a diameter or a surface finish. For PFMEA, Requirements are described in terms of Product Characteristics and Process Characteristics.
Note: The negative of these will be the Failure Mode and the Failure Cause.
A Product Characteristic (Requirement) is related to the performance of a process function and can be judged or measured. A product characteristic is shown on a product drawing or specification document, e.g., geometry, material, surface finish, coatings, etc. Process functions create product characteristics. The design documents comprehend legal requirements (e.g., lead-free material), industry requirements (e.g., thread class), customer requirements (e.g., quantity), and internal requirements (e.g., part cleanliness). Product characteristics can be measured after the product has been made (e.g., gap). Product Characteristics can also come from performance requirements, e.g., legal (performance of windshield wipers). In these cases, the measurable Product Characteristic should be listed, followed by the Performance Requirement, e.g., Spline Over-pin Diameter (Government Windshield Wiper Regulation XYZ). The specific quantitative value is optional for the PFMEA form sheet.

  • Product Characteristics: May be derived from various sources, external and internal.
  • Legal requirements: Compliance with designated health, safety, and environmental protection regulations
  • Industry Norms and Standards: ISO 9001, VDA Volume 6 Part 3 (Process Audit), SAE J1739
  • Customer Requirements: According to customer specifications, e.g., adherence to
    required quality, manufacture and provision of products in time x and quantity y (output z/hour)
  • Internal Requirements: Manufacture of the product, in process cycle, compliance with expected production costs (e.g., facilities availability, limited rejects, no corrective work), production system principles, process quality, and cleanliness instructions
  • Process Characteristics: A Process Characteristic is the process control that ensures the Product Characteristic is achieved by the process. It may be shown on manufacturing drawings or specifications (including operator work instructions, set-up instructions, error-proofing verification procedures, etc.). Process characteristics can be measured while the product is being made (e.g., press force). The specific quantitative value is optional for the PFMEA form sheet. (A small sketch of the two characteristic types follows below.)
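
The pairing of the two characteristic types, and the note that their negatives later become the Failure Mode and the Failure Cause, can be captured compactly. A sketch with hypothetical class names, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class ProductCharacteristic:
        # Shown on the product drawing/specification; measurable after production.
        # Its negative (out of specification) becomes the Failure Mode.
        name: str

    @dataclass
    class ProcessCharacteristic:
        # Process control that ensures the product characteristic is achieved;
        # measurable during production. Its negative becomes the Failure Cause.
        name: str
        achieves: ProductCharacteristic

    axial_position = ProductCharacteristic(
        "Axial position of sintered bearing (max gap per print)")
    press_force = ProcessCharacteristic("Press force", achieves=axial_position)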
Example of Parameter Diagram of Press in Sintered Bearing

3.4 Visualization of functional relationships

The interaction of process item functions, process step functions, and process work element functions may be visualized as a function network, function structure, function tree, and/or function analysis, depending on the software tool used to perform the PFMEA. For example, the Function Analysis is contained in the Form Sheet used to perform the PFMEA.

Example of Function Analysis Structure Tree
1. Function of the Process Item (function of System, Subsystem, Part Element, or Process):
   Your Plant: Assembly of shaft into pole housing assembly
   Ship-to Plant: Assembly of motor to vehicle door
   End User: Window raises and lowers
2. Function of the Process Step and Product Characteristic (quantitative value is optional): Press in sintered bearing to achieve axial position in pole housing to max gap per print
3. Function of the Process Work Element and Process Characteristic: Machine presses sintered bearing into the pole housing seat until the defined axial position
Example of Function Analysis Form Sheet

The column header numbering (1, 2, 3) is included to help show alignment between the Structure Analysis and the associated content of the Function Analysis. Work from left to right, answering the question: "How is the higher-level function enabled by lower-level functions?"

3.5 Collaboration between Engineering Teams (Systems, Safety, and Components)

Engineering teams within the company need to collaborate to make sure information is consistent for a project or customer program, especially when multiple PFMEA teams are simultaneously conducting the technical risk analysis. For example, design information from systems, safety, and/or component groups helps the PFMEA team understand the functions of the product they manufacture. This collaboration may be verbal (program meetings) or written as a summary.

3.6 Basis for Failure Analysis

Complete definition of process functions (in positive words) will lead to a comprehensive Step 4 Failure Analysis because the potential failures are ways the functions could fail (in negative words).

Step 4 Failure Analysis

4.1 Purpose

The purpose of the Process Failure Analysis is to identify failure causes, modes, and effects, and show their relationships to enable risk assessment. The main objectives of a Process Failure Analysis are:

  • Establishment of the Failure Chain
  • Potential Failure Effects, Failure Modes, and Failure Causes for each process function
  • Identification of process failure causes using a fishbone diagram (4M) or failure network
  • Collaboration between customer and supplier (Failure Effects)
  • Basis for the documentation of failures in the FMEA form sheet and the Risk Analysis step.

A failure analysis is performed for each element/step in the process description (Structure Analysis/Step 2 and Function Analysis/Step 3).

4.2 Failures

Failures of a process step are deduced from product and process characteristics. Examples include:

  • Non-conformities
  • Inconsistently or partially executed tasks
  • Unintentional activity
  • Unnecessary activity

4.3 The Failure Chain

For a specific failure, there are three aspects to be considered:
Failure Effect (FE), Failure Mode (FM), Failure Cause (FC)

Theoretical failure chain model
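
In software, the failure chain is conveniently modeled with the Failure Mode as the focus element, holding its effects and causes. A minimal sketch with hypothetical class names; the instance data anticipates the failure analysis example in Section 4.7:

    from dataclasses import dataclass, field

    @dataclass
    class FailureMode:
        # FM: the focus element of the failure chain
        description: str
        # FE: answers "What happens in the event of the Failure Mode?"
        effects: list[str] = field(default_factory=list)
        # FC: answers "Why is the Failure Mode occurring?"
        causes: list[str] = field(default_factory=list)

    fm = FailureMode(
        "Axial position of sintered bearing is not reached",
        effects=[
            "Your Plant: clearance too small to assemble shaft without damage",
            "Ship-to Plant: motor-to-door assembly requires additional insertion force",
            "End User: comfort closing time too long",
        ],
        causes=["Machine stops before reaching final position"],
    )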

4.4 Failure Effects

Failure Effects are related to functions of the process item (System, Subsystem, Part Element, or Name of Process). Failure Effects are described in terms of what the customer might notice or experience. Failures that could impact safety or cause noncompliance with regulations should be clearly identified in the PFMEA. Customers could be:

  • Internal customer (next operation/subsequent operation/operation targets)
  • External customer (Next Tier Level/OEM/dealer)
  • Legislative bodies
  • Product and/or end user/operator

Failure Effects are given a Severity rating according to:

  1. Your Plant: the effect of the failure mode assuming the defect is detected in the plant (what action will the plant take, e.g., scrap)
  2. Ship-to Plant: the effect of the failure mode assuming the defect is not detected before shipping to the next plant (what action will the next plant take, e.g., sort)
  3. End User: the effect of the process item failure (what will the end user notice, feel, hear, smell, etc., e.g., window raises too slowly)

The following questions should be asked to help determine the potential impact of failure effects:

1. Does the failure mode physically impact downstream processing or cause potential harm to equipment or operators? This includes an inability to assemble or join to a mating component at any subsequent customer's facility. If so, then identify the manufacturing impact "Your Plant" and/or "Ship-to Plant" in the PFMEA. If not, then go to question 2. Examples could include:

  • Unable to assemble at operation x
  • Unable to attach at customer facility
  • Unable to connect at customer facility
  • Cannot bore at operation X
  • Causes excessive tool wear at operation X
  • Damages equipment at operation X
  • Endangers operator at customer facility

Note: When parts cannot be assembled, there is no impact to the End User and question 2 does not apply.

2. What is the potential impact on the End User? Independent of any controls planned or implemented, including error- or mistake-proofing, consider what happens to the process item that leads to what the End User would notice or experience. This information may be available within the DFMEA. If an effect is carried over from the DFMEA, the description of the product effects in the PFMEA should be consistent with those in the corresponding DFMEA.
NOTE: In some cases, the team conducting the analysis may not know the end user effect (e.g., catalogue parts, off-the-shelf products, Tier 3 components). When this information is not known, the effects should be defined in terms of the part function and/or process specification.

Examples could include:

  • Noise
  • High effort
  • Unpleasant odor
  • Intermittent operation
  • Water leak
  • Rough idle
  • Unable to adjust
  • Difficult to control
  • Poor appearance
  • Regulatory System Function reduced or failed
  • End user lack of vehicle control
  • Safety effect on end user

3. What would happen if a failure effect was detected prior to reaching the end user? The failure effect at the current or receiving locations also needs to be considered. Identify the manufacturing impact “Your Plant” and/or “ship-to plant” in the PFMEA.
Examples could include:

  • Line shutdown
  • Stop shipment
  • Yard hold
  • 100% of product scrapped
  • Decreased line speed
  • Added manpower to maintain required line rate
  • Rework and repair

4.5 Failure Mode

A (Process) Failure Mode is defined as the manner in which the process could cause the product not to deliver or provide the intended function. The team should assume that the basic design of the product is correct; however, if there are design issues which result in process concerns, those issues should be communicated to the design team for resolution. Assume that the failure mode could occur but may not necessarily occur. Failure modes should be described in technical terms, not as a symptom noticeable by the customer. Verification of the completeness of the failure modes can be made through a review of past things-gone-wrong reports, reject or scrap reports, and group brainstorming. Sources should also include a comparison of similar processes and a review of customer (end user and subsequent operation) claims relating to similar components. There are several categories of potential failure modes, including:

  • Loss of process function/operation not performed
  • Partial function – incomplete operation
  • Degradation of process function
  • Overachieving process function – too much, too high
  • Intermittent process function – operation not consistent
  • Unstable operation
  • Unintended process function – wrong operation
  • Wrong part installed
  • Delayed process function – operation too late

Typical failure modes could be, but are not limited to:

  • Hole too shallow, too deep, missing, or off location
  • Dirty surface
  • Surface finish too smooth
  • Misaligned connector pins
  • Connector not fully seated
  • Pass a bad part, or reject a good part, bypass inspection operation
  • Label missing
  • Barcode not readable
  • ECU flashed with wrong software.

4.6 Failure Cause:

A failure cause is an indication of why a failure mode could occur. The consequence of a cause is the failure mode. Identify, to the extent possible, every potential manufacturing or assembly cause for each failure mode. The cause should be listed as concisely and completely as possible so that efforts (controls and actions) can be aimed at the appropriate causes. Typical failure causes may include the classic Ishikawa 4M categories, but are not limited to:

  • Man: set-up worker, machine operator/associate, material associate, maintenance technician, etc.
  • Machine/Equipment: robot, hopper reservoir tank, injection molding machine, spiral conveyor, inspection devices, fixtures, etc.
  • Material (Indirect): machining oil, installation grease, washer concentration (aid for operation), etc.
  • EnvironMent (Milieu): ambient conditions such as heat, dust, contamination, lighting, noise, etc.

Note: In preparing the FMEA, assume that the incoming parts/materials are correct. Exceptions can be made by the FMEA team where historical data indicate deficiencies in incoming part quality.

One method to help reveal/uncover failure causes is to have a facilitator lead the team through thought-provoking questions. These can be broad category questions, enough to stimulate the process experts' thought process, while keeping the number of questions to a manageable level. Questions can be process specific and broken down into the 4M categories. An initial list of questions can be formed by reviewing the Failure Causes in previous PFMEAs.
Example – Assembly Process:

4.6.1 Man

  • From parts available within the process, can wrong part be applied?
  • Can no part be applied?
  • Can the parts be loaded incorrectly?
  • Can parts be damaged – From pickup to application?
  • Can wrong material be used?

4.6.2 Machine

  • Can automated process be interrupted?
  • Can inputted data be entered incorrectly?
  • Can machine be run in manual mode, bypassing automated controls?
  • Is there a schedule to confirm prevention and detection controls?

4.6.3 Material (indirect)

  • Can too much/ too little / no material be used?
  • Can material be applied to a wrong location?

4.6.4 EnvironMent (Milieu)

  • Is lighting adequate for task?
  • Can parts used within the process, be considered foreign material?

The description of the failure cause needs to be clear. Terms such as "defective," "broken," "operator failure," "non-fulfillment," or "not OK" are insufficient to comprehensively assign the failure cause and mode and to determine actions.

4.7 Failure Analysis

Based on the process steps, the failures are derived and failure chains (i.e., Failure structure/failure trees/failure network) are created from the function analysis. The focus element of the failure structure is the Failure Mode, with its associated Failure Effects and potential Failure Causes. Depending on the focus, a failure can be viewed as a Failure Effect, a Failure Mode, or a Failure Cause. To link failure cause to a Failure Mode, the question should be “Why is the Failure Mode occurring?”
To link failure effects to a Failure Mode, the question should be “What happens in the event of a Failure Mode?“

Example of Failure Analysis Structure Tree
FAILURE ANALYSIS (STEP 4)
1. Failure Effects (FE) to the Next Higher Level Element and/or End User:
   Your Plant: Clearance too small to assemble shaft without potential damage
   Ship-to Plant: Assembly of motor to vehicle door requires additional insertion force, with potential damage
   End User: Comfort closing time too long
2. Failure Mode (FM) of the Focus Element: Axial position of sintered bearing is not reached
3. Failure Cause (FC) of the Work Element: Machine stops before reaching final position
Example of Failure Analysis Form Sheet

Begin building the failure chain by using the information in the Function Analysis. When using a customer-specific form sheet or software, follow the methodology as defined by your customer.
1. Failure Effects (FE): The effect of failure associated with the "Next Higher Level Element and/or End User" in the Function Analysis.
Note for spreadsheet users: A potential failure mode may have more than one failure effect. Failure effects are grouped in the spreadsheet in order to avoid excessive duplication of the same failure modes and causes.

2. Failure Mode (FM): The mode (or type) of failure associated with the "Focus Element" in the Function Analysis.
Note for spreadsheet users: It is recommended that users start with the failure mode and then identify related failure effects using the information in the #1 Function of the Process Item column of the Function Analysis section because some or all categories may apply.

3. Failure Cause (FC): The cause of failure associated with the "Work Element and Process Characteristics" in the Function Analysis.

4.8 Relationship between PFMEA and DFMEA

A design failure of a feature (product characteristic) can cause a failure for one or more product functions. The corresponding process failure is the inability of the process to manufacture the same feature as designed. Only when the failure to conform to a product characteristic alone leads to the Failure Effect is the Failure Effect in the Design FMEA the same as in the Process FMEA. All Failure Effects which are caused by a failure of the process and which are not identified in the Design FMEA have to be newly defined and assessed in the Process FMEA. The Failure Effects related to the product, system, and/or end user and their associated severities should be documented when known, but not assumed. The key to the identification of Failure Effects and associated severity is communication among the involved parties and an understanding of the differences and similarities of the analyzed failures in DFMEA and PFMEA. The figure below shows a potential interrelation of product-related Failure Effects, Failure Modes, and Failure Causes from the "End User" level to the level of production (PFMEA level).
Note: The expectation of the relative time of and the flow of information from the DFMEA to the PFMEA is different in non-standard development flows, such as where development of a “standard” process precedes development of the products that will be manufactured using it. In such cases, the appropriate timing and flow of information between these FMEAs should be defined by the organization.

Relationship between PFMEA and DFMEA

4.9 Failure Analysis Documentation

After the Structure Analysis, Function Analysis, and Failure Analysis are complete, a structure tree or spreadsheet can have multiple views.

4.10 Collaboration between Customer and Supplier (Failure Effects)

The output of the Failure Analysis may be reviewed by customers and suppliers prior to, or after, the Risk Analysis step, based on agreements with the customer and the need for sharing with the supplier.

4.11 Basis for Risk Analysis

Complete definition of potential failures will lead to a complete Step 5 Risk Analysis because the rating of Severity, Occurrence, and Detection are based on the failure descriptions. The Risk Analysis may be incomplete if potential failures are too vague or missing.

Step 5 Risk Analysis

5.1 Purpose:

The purpose of Process Risk Analysis is to estimate risk by evaluating Severity, Occurrence and Detection, in order to prioritize the need for actions. The main objectives of the Process Risk Analysis are:

  • Assignment of existing and/or planned controls and rating of failures
  • Assignment of Prevention Controls to the Failure Causes
  • Assignment of Detection Controls to the Failure Causes and/or Failure Modes
  • Rating of Severity, Occurrence and Detection for each failure chain
  • Evaluation of Action Priority
  • Collaboration between customer and supplier (Severity)
  • Basis for the Optimization step

There are two different Control Groups: Current Prevention Controls, and Current Detection Controls.

5.2 Current Prevention Controls (PC)

5.2.1 Process planning

Definition: Current Prevention Controls facilitate optimal process planning to minimize the possibility of failure occurrence.
Prevention of possible layout deficiencies of the production facility:

  • Test runs according to start-up regulation AV 17/3b

5.2.2 Production process

Definition: Eliminate (prevent) the failure cause or reduce its rate of occurrence.
Prevention of defectively produced parts in the production facility:

  • Two-handed operation of machines
  • Subsequent part cannot be attached (Poka-Yoke)
  • Form-dependent position
  • Equipment maintenance
  • Operator maintenance
  • Work instructions /Visual aids
  • Machine controls
  • First part release

Failure Causes are rated for occurrence, taking into account the effectiveness of the current prevention control. Current Prevention Controls describe measures which should be implemented in the design process and verified during prototype, machine qualifications (run-off), and process verification prior to start of regular production. Prevention Controls may also include standard work instructions, set-up procedures, preventive maintenance, calibration procedures, error-proofing verification procedures, etc.

5.3 Current Detection Controls (DC)

Definition: Current Detection controls detect the existence of a failure cause or the failure mode, either by automated or manual methods, before the item leaves the process or is shipped to the
customer.

Examples of Current Detection controls:

  • Visual inspection
  • Visual inspection with sample checklist
  • Optical inspection with camera system
  • Optical test with limit sample
  • Attributive test with mandrel
  • Dimensional check with a caliper gauge
  • Random inspection
  • Torque monitoring
  • Press load monitoring
  • End of line function check
Prevention and Detection In the Process FMEA
Road-map of process understanding

5.4 Current Prevention and Detection Controls

Current Prevention and Detection Controls should be confirmed to be implemented and effective. This can be done during an in-station review (e.g. Line Side Review, Line walks and Regular audits). If the control is not effective, additional action may be needed. The Occurrence and Detection ratings should be reviewed when using data from previous processes, due to the possibility of different conditions for the new process.

5.5 Evaluations

Each Failure Mode, Cause and Effect relationship (failure-chain or net) is assessed for its independent risk. There are three rating criteria for the evaluation of risk:

  • Severity (S): stands for the Severity of the Failure Effect
  • Occurrence (O): stands for the Occurrence of the Failure Cause
  • Detection (D): stands for the Detection of the occurred Failure Cause and/or Failure Mode.

Evaluation numbers from 1 to 10 are used for S, O, and D respectively, in which 10 stands for the highest risk contribution.
NOTE: It is not appropriate to compare the ratings of one team's FMEA with the ratings of another team's FMEA, even if the product/process appears to be identical, since each team's environment is unique and thus their respective individual ratings will be unique (i.e., the ratings are subjective).

5.6 Severity (S)

Severity is a rating number associated with the most serious effect for a given failure mode for the process step being evaluated. It is a relative rating within the scope of the individual FMEA and is determined without regard for Occurrence or Detection. For process-specific effects, the Severity rating should be determined using the criteria in the evaluation table below. The table may be augmented to include corporate or product line specific examples. The evaluations of the Failure Effects should be mutually agreed to by the customer and the organization.
NOTE: If the customer impacted by a Failure Mode is the next manufacturing or assembly plant or the product user, assessing the severity may lie outside the immediate process engineer’s team’s field of experience or knowledge. In these cases, the Design FMEA, design engineer, and/or subsequent manufacturing or assembly plant process engineer, should be consulted in order to comprehend the propagation of effects.

Process General Evaluation Criteria Severity (S)
Potential Failure Effects are rated according to the criteria below. (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

S 10 | Effect: Very High
  Impact to Your Plant: Failure may result in an acute health and/or safety risk for the manufacturing or assembly worker.
  Impact to Ship-to Plant (when known): Failure may result in an acute health and/or safety risk for the manufacturing or assembly worker.
  Impact to End User (when known): Affects safe operation of the vehicle and/or other vehicles, or the health of the driver, passengers, road users, or pedestrians.
S 9 | Effect: Very High
  Your Plant: Failure may result in in-plant regulatory noncompliance.
  Ship-to Plant: Failure may result in in-plant regulatory noncompliance.
  End User: Noncompliance with regulations.
S 8 | Effect: High
  Your Plant: 100% of the production run affected may have to be scrapped. Failure may result in in-plant regulatory noncompliance or may pose a chronic health and/or safety risk for the manufacturing or assembly worker.
  Ship-to Plant: Line shutdown greater than a full production shift; stop shipment possible; field repair or replacement required (assembly to end user), other than for regulatory noncompliance; or may pose a chronic health and/or safety risk for the manufacturing or assembly worker.
  End User: Loss of primary vehicle function necessary for normal driving during expected service life.
S 7 | Effect: High
  Your Plant: Product may have to be sorted and a portion (less than 100%) scrapped; deviation from primary process; decreased line speed or added manpower.
  Ship-to Plant: Line shutdown from 1 hour up to a full production shift; stop shipment possible; field repair or replacement required (assembly to end user), other than for regulatory noncompliance.
  End User: Degradation of primary vehicle function necessary for normal driving during expected service life.
S 6 | Effect: Moderate
  Your Plant: 100% of the production run may have to be reworked off line and accepted.
  Ship-to Plant: Line shutdown up to one hour.
  End User: Loss of secondary vehicle function.
S 5 | Effect: Moderate
  Your Plant: A portion of the production run may have to be reworked off line and accepted.
  Ship-to Plant: Less than 100% of product affected; strong possibility of additional defective product; sort required; no line shutdown.
  End User: Degradation of secondary vehicle function.
S 4 | Effect: Moderate
  Your Plant: 100% of the production run may have to be reworked in station before it is processed.
  Ship-to Plant: Defective product triggers a significant reaction plan; additional defective products not likely; sort not required.
  End User: Very objectionable appearance, sound, vibration, harshness, or haptics.
S 3 | Effect: Low
  Your Plant: A portion of the production run may have to be reworked in station before it is processed.
  Ship-to Plant: Defective product triggers a minor reaction plan; additional defective products not likely; sort not required.
  End User: Moderately objectionable appearance, sound, vibration, harshness, or haptics.
S 2 | Effect: Low
  Your Plant: Slight inconvenience to process, operation, or operator.
  Ship-to Plant: Defective product triggers no reaction plan; additional defective products not likely; sort not required; requires feedback to supplier.
  End User: Slightly objectionable appearance, sound, vibration, harshness, or haptics.
S 1 | Effect: Very Low
  Your Plant / Ship-to Plant / End User: No discernible effect.
PFMEA SEVERITY (S)
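
Because Severity is driven by the most serious effect for the failure mode, the rating step reduces to a lookup plus a maximum. A trivial sketch, assuming the team has already matched each customer-level effect to a row of the table above:

    # Severity ratings already assigned per customer level from the table above
    effects = {
        "Your Plant":    5,  # portion of run reworked off line and accepted
        "Ship-to Plant": 6,  # line shutdown up to one hour
        "End User":      6,  # loss of secondary vehicle function
    }

    # Severity of the failure mode is driven by the most serious effect
    S = max(effects.values())
    print(S)  # 6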

5.7 Occurrence (O)

The Occurrence rating (O) describes the occurrence of the Failure Cause in the process, taking into account the associated current prevention controls. The Occurrence rating number is a relative rating within the scope of the FMEA and may not reflect the actual occurrence. The Occurrence rating describes the potential of the failure cause to occur, according to the rating table, without regard to the detection controls. Expertise or other experience with comparable processes, for example, can be considered in the assessment of the rating numbers. In determining this rating, questions such as the following should be considered:

  • What is the equipment history with similar processes and process steps?
  • What is the field experience with similar processes?
  • Is the process a carryover or similar to a previous process?
  • How significant are changes from a current production process?
  • Is the process completely new?
  • What are the environmental changes?
  • Are best practices already implemented?
  • Do standard instructions exist? (e.g., work instructions, set-up and calibration procedures, preventive maintenance, error-proofing verification procedures, and process monitoring verification checklists)
  • Are technical error-proofing solutions implemented? (e.g., product or process design, fixture and tool design, established process sequence, production control tracking/traceability, machine capability, and SPC charting)
Occurrence Potential (O) for the Process
Potential Failure Causes are rated according to the criteria below. Consider Prevention Controls when determining the best Occurrence estimate. Occurrence is a predictive, qualitative rating made at the time of evaluation and may not reflect the actual occurrence. The Occurrence rating number is a relative rating within the scope of the FMEA (process being evaluated). For Prevention Controls with multiple Occurrence ratings, use the rating that best reflects the robustness of the control. (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

O | Prediction of Failure Cause Occurring | Type of Control | Occurrence Criteria – PFMEA
O 10 | Extremely High | None | No prevention controls.
O 8-9 | Very High | Behavioral | Prevention controls will have little effect in preventing the failure cause.
O 6-7 | High | Behavioral or Technical | Prevention controls somewhat effective in preventing the failure cause.
O 4-5 | Moderate | Behavioral or Technical | Prevention controls are effective in preventing the failure cause.
O 3 | Low | Best practices: Behavioral or Technical | Prevention controls are highly effective in preventing the failure cause.
O 2 | Very Low | Best practices: Behavioral or Technical | Prevention controls are highly effective in preventing the failure cause.
O 1 | Extremely Low | Technical | Prevention controls are extremely effective in preventing the failure cause from occurring due to design (e.g., part geometry) or process (e.g., fixture or tooling design). Intent of prevention controls: the Failure Mode cannot be physically produced due to the Failure Cause.

Prevention Control Effectiveness: Consider whether prevention controls are technical (rely on machines, tool life, tool material, etc.), use best practices (fixtures, tool design, calibration procedures, error-proofing verification, preventive maintenance, work instructions, statistical process control charting, process monitoring, product design, etc.), or are behavioral (rely on certified or non-certified operators, skilled trades, team leaders, etc.) when determining how effective the prevention controls will be.
PFMEA OCCURRENCE (O)

5.8 Detection (D)

Detection is the rating associated with a prediction of the most effective process control from the listed detection-type process controls. Detection is a relative rating within the scope of the individual FMEA and is determined without regard for Severity or Occurrence. Detection should be estimated using the criteria in the table below. This table may be augmented with examples of common detection methods used by the company. The intent of the term "control discrepant product" (ranks 3 and 4) is to have controls/systems/procedures in place that control the discrepant product in such a manner that the probability of the product escaping the facility is very low; the controls start from when the product is identified as discrepant to the point of final disposition. These controls usually exceed the controls that are used for discrepant products with higher Detection ranks. After implementation of any unproven control, its effectiveness can be verified and re-evaluated. In determining this estimate, questions such as the following should be considered:

  • Which test is most effective in detecting the Failure Cause or the Failure Mode?
  • What is the usage profile/duty cycle required to detect the failure?
  • What sample size is required to detect the failure?
  • Is the test procedure proven for detecting this Cause-Failure Mode?
Detection Potential (D) for the Validation of the Process Design
Detection Controls are rated according to Detection Method Maturity and Opportunity for Detection. (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

D 10 | Ability to Detect: Very Low
  Maturity: No testing or inspection method has been established or is known.
  Opportunity: The failure mode will not or cannot be detected.
D 9 | Very Low
  Maturity: It is unlikely that the testing or inspection method will detect the failure mode.
  Opportunity: The failure mode is not easily detected through random or sporadic audits.
D 8 | Low
  Maturity: Test or inspection method has not been proven to be effective and reliable (e.g., plant has little or no experience with the method, gauge R&R results marginal on a comparable process or this application, etc.).
  Opportunity: Human inspection (visual, tactile, audible), or use of manual gauging (attribute or variable), that should detect the failure mode or failure cause.
D 7 | Low
  Maturity: (as for D 8)
  Opportunity: Machine-based detection (automated or semi-automated with notification by light, buzzer, etc.), or use of inspection equipment such as a coordinate measuring machine, that should detect the failure mode or failure cause.
D 6 | Moderate
  Maturity: Test or inspection method has been proven to be effective and reliable (e.g., plant has experience with the method; gauge R&R results are acceptable on a comparable process or this application, etc.).
  Opportunity: Human inspection (visual, tactile, audible), or use of manual gauging (attribute or variable), that will detect the failure mode or failure cause (including product sample checks).
D 5 | Moderate
  Maturity: (as for D 6)
  Opportunity: Machine-based detection (semi-automated with notification by light, buzzer, etc.), or use of inspection equipment such as a coordinate measuring machine, that will detect the failure mode or failure cause (including product sample checks).
D 4 | High
  Maturity: System has been proven to be effective and reliable (e.g., plant has experience with the method on an identical process or this application); gauge R&R results are acceptable.
  Opportunity: Machine-based automated detection method that will detect the failure mode downstream and prevent further processing, or the system will identify the product as discrepant and allow it to automatically move forward in the process until the designated reject unload area. Discrepant product will be controlled by a robust system that will prevent outflow of the product from the facility.
D 3 | High
  Maturity: (as for D 4)
  Opportunity: Machine-based automated detection method that will detect the failure mode in-station and prevent further processing, or the system will identify the product as discrepant and allow it to automatically move forward in the process until the designated reject unload area. Discrepant product will be controlled by a robust system that will prevent outflow of the product from the facility.
D 2 | High
  Maturity: Detection method has been proven to be effective and reliable (e.g., plant has experience with the method, error-proofing verifications, etc.).
  Opportunity: Machine-based detection method that will detect the cause and prevent the failure mode (discrepant part) from being produced.
D 1 | Very High
  Failure mode cannot be physically produced as designed or processed, or detection methods are proven to always detect the failure mode or failure cause.
PFMEA DETECTION (D)

5.9 Action Priority (AP)

Once the team has completed the initial identification of failure modes and effects, causes, and controls, including ratings for severity, occurrence, and detection, they must decide if further efforts are needed to reduce the risk. Due to the inherent limitations on resources, time, technology, and other factors, they must choose how to best prioritize these efforts.

The Action Priority (AP) table accounts for all 1000 possible combinations of S, O, and D. It was created to give more emphasis to severity first, then occurrence, then detection. This logic follows the failure-prevention intent of FMEA. The AP table offers a suggested high-medium-low priority for action. Companies can use this single system to evaluate action priorities instead of the multiple systems required by multiple customers.

Risk Priority Numbers are the product of S x O x D and range from 1 to 1000. The RPN distribution can provide some information about the range of ratings, but RPN alone is not an adequate method to determine the need for more actions, since RPN gives equal weight to S, O, and D. For this reason, RPN can produce similar risk numbers for very different combinations of S, O, and D, leaving the team uncertain about how to prioritize. When using RPN, it is recommended to use an additional method, such as S x O, to prioritize similar RPN results. The use of an RPN threshold is not a recommended practice for determining the need for actions.

Risk matrices can represent combinations of S and O, S and D, and O and D. These matrices provide a visual representation of the results of the analysis and can be used as an input to the prioritization of actions, based on company-established criteria not included in this publication. Since the AP table was designed to work with the Severity, Occurrence, and Detection tables provided in this handbook, if the organization chooses to modify the S, O, D tables for specific products, processes, or projects, the AP table should also be carefully reviewed.

Note: Action Priority rating tables are the same for DFMEA and PFMEA, but different for FMEA-MSR.

Priority High (H): Highest priority for review and action. The team needs to either identify an appropriate action to improve prevention and/or detection controls or justify and document why current controls are adequate.

Priority Medium (M): Medium priority for review and action. The team should identify appropriate actions to improve prevention and/or detection controls or, at the discretion of the company, justify and document why controls are adequate.
Priority Low (L): Low priority for review and action. The team could identify actions to improve prevention or detection controls. It is recommended that potential Severity 9-10 failure effects with Action Priority High and Medium, at a minimum, be reviewed by management, including any recommended actions that were taken.
This is not a prioritization of High, Medium, or Low risk; it is a prioritization of the need for actions to reduce risk.
Note: It may be helpful to include a statement such as “No further action is needed” in the Remarks field as appropriate.

Action Priority (AP) for DFMEA and PFMEA
Action Priority is based on combinations of Severity, Occurrence, and Detection ratings in order to prioritize actions for risk reduction. (The Comments column is blank until filled in by the user.)
Ability-to-detect bands: D 7-10 = Low – Very Low; D 5-6 = Moderate; D 2-4 = High; D 1 = Very High.

Severity 9-10 (Product or Plant Effect: Very High)
  Occurrence 8-10 (Very High): D 7-10 → H; D 5-6 → H; D 2-4 → H; D 1 → H
  Occurrence 6-7 (High):       D 7-10 → H; D 5-6 → H; D 2-4 → H; D 1 → H
  Occurrence 4-5 (Moderate):   D 7-10 → H; D 5-6 → H; D 2-4 → H; D 1 → M
  Occurrence 2-3 (Low):        D 7-10 → H; D 5-6 → M; D 2-4 → L; D 1 → L
  Occurrence 1 (Very Low):     D 1-10 → L

Severity 7-8 (Product or Plant Effect: High)
  Occurrence 8-10 (Very High): D 7-10 → H; D 5-6 → H; D 2-4 → H; D 1 → H
  Occurrence 6-7 (High):       D 7-10 → H; D 5-6 → H; D 2-4 → H; D 1 → M
  Occurrence 4-5 (Moderate):   D 7-10 → H; D 5-6 → M; D 2-4 → M; D 1 → M
  Occurrence 2-3 (Low):        D 7-10 → M; D 5-6 → M; D 2-4 → L; D 1 → L
  Occurrence 1 (Very Low):     D 1-10 → L

Severity 4-6 (Product or Plant Effect: Moderate)
  Occurrence 8-10 (Very High): D 7-10 → H; D 5-6 → H; D 2-4 → M; D 1 → M
  Occurrence 6-7 (High):       D 7-10 → M; D 5-6 → M; D 2-4 → M; D 1 → L
  Occurrence 4-5 (Moderate):   D 7-10 → M; D 5-6 → L; D 2-4 → L; D 1 → L
  Occurrence 2-3 (Low):        D 7-10 → L; D 5-6 → L; D 2-4 → L; D 1 → L
  Occurrence 1 (Very Low):     D 1-10 → L

Severity 2-3 (Product or Plant Effect: Low)
  Occurrence 8-10 (Very High): D 7-10 → M; D 5-6 → M; D 2-4 → L; D 1 → L
  Occurrence 6-7 (High):       D 7-10 → L; D 5-6 → L; D 2-4 → L; D 1 → L
  Occurrence 4-5 (Moderate):   D 7-10 → L; D 5-6 → L; D 2-4 → L; D 1 → L
  Occurrence 2-3 (Low):        D 7-10 → L; D 5-6 → L; D 2-4 → L; D 1 → L
  Occurrence 1 (Very Low):     D 1-10 → L

Severity 1 (No discernible effect): any Occurrence, any Detection → L
Table AP – ACTION PRIORITY FOR DFMEA and PFMEA
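
Table AP is a pure lookup on banded S, O, and D values, which makes it easy to encode and to contrast with RPN. The sketch below transcribes the S 9-10 and S 7-8 bands from the table above (the remaining bands extend the dictionary the same way); the function names are illustrative.

    def band(value, bands):
        """Map a 1-10 rating to its row label in Table AP (e.g., O=6 -> '6-7')."""
        for lo, hi, label in bands:
            if lo <= value <= hi:
                return label
        raise ValueError(f"rating out of range: {value}")

    S_BANDS = [(9, 10, "9-10"), (7, 8, "7-8"), (4, 6, "4-6"), (2, 3, "2-3"), (1, 1, "1")]
    O_BANDS = [(8, 10, "8-10"), (6, 7, "6-7"), (4, 5, "4-5"), (2, 3, "2-3"), (1, 1, "1")]
    D_BANDS = [(7, 10, "7-10"), (5, 6, "5-6"), (2, 4, "2-4"), (1, 1, "1")]

    # Rows transcribed from Table AP above (S 9-10 and S 7-8 bands shown;
    # the S 4-6, S 2-3, and S 1 bands extend the dictionary the same way).
    AP = {
        ("9-10", "8-10"): {"7-10": "H", "5-6": "H", "2-4": "H", "1": "H"},
        ("9-10", "6-7"):  {"7-10": "H", "5-6": "H", "2-4": "H", "1": "H"},
        ("9-10", "4-5"):  {"7-10": "H", "5-6": "H", "2-4": "H", "1": "M"},
        ("9-10", "2-3"):  {"7-10": "H", "5-6": "M", "2-4": "L", "1": "L"},
        ("9-10", "1"):    {"7-10": "L", "5-6": "L", "2-4": "L", "1": "L"},
        ("7-8", "8-10"):  {"7-10": "H", "5-6": "H", "2-4": "H", "1": "H"},
        ("7-8", "6-7"):   {"7-10": "H", "5-6": "H", "2-4": "H", "1": "M"},
        ("7-8", "4-5"):   {"7-10": "H", "5-6": "M", "2-4": "M", "1": "M"},
        ("7-8", "2-3"):   {"7-10": "M", "5-6": "M", "2-4": "L", "1": "L"},
        ("7-8", "1"):     {"7-10": "L", "5-6": "L", "2-4": "L", "1": "L"},
    }

    def action_priority(s, o, d):
        return AP[(band(s, S_BANDS), band(o, O_BANDS))][band(d, D_BANDS)]

    # Identical RPN, different Action Priority -- why RPN alone is inadequate:
    print(7 * 2 * 10, action_priority(7, 2, 10))   # 140 M
    print(7 * 10 * 2, action_priority(7, 10, 2))   # 140 H

The last two lines show two failure chains with an identical RPN of 140 but different Action Priorities (M versus H), which is exactly why the handbook recommends an additional prioritization method beyond RPN alone.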
Example of PFMEA with Risk Analysis Form Sheet

5.10 Collaboration between Customer and Supplier (Severity)

The output of the Risk Analysis creates a mutual understanding of technical risk between customers and suppliers. Methods of collaboration range from verbal exchanges to formal reports. The amount of information shared is based on the needs of a project, company policy, contractual agreements, and so on. The information shared depends on the placement of the company in the supply chain. Some examples are listed below.

  1. The OEM may compare design functions, failure effects, and severity from a vehicle-level DFMEA with the Tier 1 supplier PFMEA.
  2. The Tier 1 supplier communicates necessary information about product characteristics on product drawings and/or specifications, or by other means, including designation of standard or special characteristics and severity. This information is used as an input to the Tier 2 supplier PFMEA as well as the Tier 1's internal PFMEA. When the design team communicates the associated risk of making product characteristics out of specification, the process team can build in the appropriate level of prevention and detection controls in manufacturing.

5.11 Basis for Optimization

The output of Steps 1, 2, 3, 4, and 5 of the 7-Step FMEA process is used to determine if additional design or testing action is needed. The process reviews, customer reviews, management reviews, and cross-functional team meetings lead to Step 6 Optimization.

Step 6: Optimization

6.1 Purpose

The purpose of the Process Optimization Step is to determine actions to mitigate risk and assess the effectiveness of these actions. The end result is a process which minimizes the risk of producing and delivering products that do not meet the customer and stakeholder expectations. The main objectives of a Process Optimization are:

  • Identification of the actions necessary to reduce risks
  • Assignment of responsibilities and deadlines for action implementation
  • Implementation and documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken.
  • Collaboration between the FMEA team, management, customers, and suppliers regarding potential failures.
  • Basis for refinement of the product and/or process requirements and prevention and detection controls

The primary objective of optimization is to develop actions that reduce risk by improving the process. In this step, the team reviews the results of the risk analysis and assigns actions to lower the occurrence of the failure cause or increase the ability to detect the failure cause or failure mode. Actions may also be assigned which improve the process but do not necessarily lower the risk assessment rating. Actions represent a commitment to take a specific, measurable, and achievable action, not potential actions which may never be implemented. Actions are not intended to be used for activities that are already planned, as these are documented in the Prevention or Detection Controls and are already considered in the initial risk analysis. All actions should have a responsible individual and a target completion time associated with the action. If the team decides that no further actions are necessary, "No further action is needed" is written in the Remarks field to show the risk analysis was completed. The PFMEA can be used as the basis for continuous improvement of the process. The optimization is most effective in the following order:

  • Process modifications to eliminate or mitigate a Failure Effect (FE)
  • Process modifications to reduce the Occurrence (O) of the Failure Cause (FC).
  • Increase the Detection (D) ability for the Failure Cause (FC) or Failure Mode (FM).
  • In the case of process modifications, all impacted process steps are evaluated again.

In the case of concept modifications, all steps of the FMEA are reviewed for the affected sections. This is necessary because the original analysis is no longer valid, since it was based upon a different manufacturing concept.

6.2 Assignment of Responsibilities

Each action should have a responsible individual and a Target Completion Date (TCD) associated with it. The responsible person ensures the action status is updated. If the action is confirmed, this person is also responsible for the action implementation. The Actual Completion Date for Preventive and Detection Actions is documented, including the date the actions are implemented.
Target Completion Dates should be realistic (i.e., in accordance with the product development plan, prior to process validation, and prior to start of production).

6.3 Status of the Actions

Suggested levels for Status of Actions (sketched as a small enumeration after this list):

  • Open: No action defined.
  • Decision pending (optional): The action has been defined but has not yet been decided on. A
    decision paper is being created.
  • Implementation pending (optional): The action has been decided on but not yet implemented.
  • Completed: Completed actions have been implemented and their effectiveness has been demonstrated and documented. A final evaluation has been done.
  • Not Implemented: Not Implemented status is assigned when a decision is made not to implement an action. This may occur when risks related to practical and technical limitations are beyond current capabilities.
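
Tracked in FMEA software, these levels form a small state model. A minimal sketch as a Python Enum; the identifier names are illustrative assumptions:

    from enum import Enum

    class ActionStatus(Enum):
        OPEN = "No action defined"
        DECISION_PENDING = "Defined, not yet decided on (optional level)"
        IMPLEMENTATION_PENDING = "Decided on, not yet implemented (optional level)"
        COMPLETED = "Implemented; effectiveness demonstrated and documented"
        NOT_IMPLEMENTED = "Decision made not to implement"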

The FMEA is not considered "complete" until the team assesses each item's Action Priority and either accepts the level of risk or documents closure of all actions. If "No Action Taken," then the Action Priority is not reduced, and the risk of failure is carried forward into the product. Actions are open loops that need to be closed in writing.

6.4 Assessment of Action Effectiveness

When an action has been completed, the Occurrence and Detection values are reassessed, and a new Action Priority may be determined. The new action receives a preliminary Action Priority rating as a prediction of effectiveness. However, the status of the action remains "implementation pending" until the effectiveness has been tested. After the tests are finalized, the preliminary rating has to be confirmed or adapted where indicated. The status of the action is then changed from "implementation pending" to "completed." The reassessment should be based on the effectiveness of the Preventive and Detection Actions taken, and the new values are based on the definitions in the Process FMEA Occurrence and Detection rating tables.
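
The bookkeeping for a completed action might look as follows. A sketch assuming a Severity of 7, with AP values read off Table AP above; none of the ratings here are prescriptive:

    # Before the action: S=7, O=6, D=6 -> Table AP (S 7-8, O 6-7, D 5-6) gives H
    line_item = {"S": 7, "O": 6, "D": 6, "AP": "H",
                 "status": "implementation pending"}

    # A detection action is implemented and its effectiveness demonstrated:
    # O and D are re-rated against the Occurrence and Detection tables;
    # Severity stays with the effect and is unchanged.
    line_item.update(O=2, D=2)
    line_item["AP"] = "L"                  # Table AP: S 7-8, O 2-3, D 2-4 -> L
    line_item["status"] = "completed"
    print(line_item)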

6.5 Continual Improvement

The PFMEA serves as a historical record for the process. Therefore, the original Severity, Occurrence, and Detection (S, O, D) ratings need to be visible, or at a minimum available and accessible, as part of the version history. The completed analysis becomes a repository to capture the progression of process decisions and design refinements. However, original S, O, D ratings may be modified for foundation, family, or generic PFMEAs, because that information is used as a starting point for a process-specific analysis.

6.6 Collaboration between the FMEA team, Management, Customers, and Suppliers regarding Potential Failures:

Communication between the FMEA team, management, customers and suppliers during the development of the technical risk analysis and/or when the PFMEA is initially complete brings people together to improve their understanding of product and process functions and failures. In this way, there is a transfer of knowledge that promotes risk reduction.

Example of PFMEA Optimization with new Risk Evaluation Form Sheet

Step 7: Results Documentation

7.1 Purpose

The purpose of the results documentation step is to summarize and communicate the results of the Failure Mode and Effects analysis activity. The main objectives of Process Results Documentation are:

  • Communication of results and conclusions of the analysis
  • Establishment of the content of the documentation
  • Documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
  • Communication of actions taken to reduce risks, including within the organization. and with customers and/or suppliers as appropriate
  • Record of risk analysis and risk reduction to acceptable levels.

7.2 FMEA Report

The scope and results of an FMEA should be summarized in a report. The report can be used for communication purposes within a company, or between companies. The report is not meant to replace reviews of the PFMEA details when requested by management, customers, or suppliers. It is meant to be a summary for the PFMEA team and others to confirm completion of each of the tasks and to review the results of the analysis. It is important that the content of the documentation fulfills the requirements of the organization, the intended reader, and relevant stakeholders. Details may be agreed upon between the parties. In this way, it is also ensured that all details of the analysis and the intellectual property remain at the developing company. The layout of the document may be company specific. However, the report should indicate the technical risk of failure as a part of the development plan and project milestones. The content may include the following:

  1. A statement of final status compared to original goals established in the Project Plan
    • FMEA Intent – Purpose of this FMEA?
    • FMEA Timing – FMEA due date?
    • FMEA Team – List of participants?
    • FMEA Task – Scope of this FMEA?
    • FMEA Tool – How do we conduct the analysis? (method used)
  2. A summary of the scope of the analysis, identifying what is new.
  3. A summary of how the functions were developed.
  4. A summary of at least the high-risk failures as determined by the team and provide a copy of the specific S/O/D rating tables and method of action prioritization (i.e., Action Priority table).
  5. A summary of the actions taken and/or planned to address the high-risk failures including status of those actions.
  6. A plan and commitment of timing for ongoing FMEA improvement actions.
    • Commitment and timing to close open actions.
    • Commitment to review and revise the PFMEA during mass production to ensure the accuracy and completeness of the analysis as compared with the production design (e.g. revisions triggered from design changes, corrective actions. etc., based on company procedures).
    • Commitment to capture “things gone wrong” in foundation PFMEA’s for the benefit of future analysis reuse, when applicable.
Standard PFMEA Form Sheet
Alternate PFMEA Form Sheet
PFMEA – Software View

PFMEA Form Sheet Hints: Step 7

PFMEA Step 7 is independently handled by each organization and is not recorded on the PFMEA form sheet.

AIAG & VDA Design Failure Mode and Effect Analysis

Step 1: Planning and Preparation

1.1 Purpose

The purpose of the Design FMEA Planning and Preparation Step is to define which FMEAs will be done for a project, and to define what is included and excluded in each FMEA based on the type of analysis being developed, i.e., system, subsystem, or component. The main objectives of Design FMEA Planning and Preparation are:

  • Project identification
  • Project plan: InTent, Timing, Team, Tasks, Tools (5T)
  • Analysis boundaries: What is included and excluded from the analysis
  • Identification of baseline FMEA with lessons learned
  • Basis for the Structure Analysis step

1.2 DFMEA Project Identification and Boundaries

DFMEA Project identification includes a clear understanding of what needs to be evaluated. This involves a decision-making process to define the DFMEAs that are needed for a customer program. What to exclude can be just as important as what to include in the analysis. Below are some basic questions that help identify DFMEA projects.

  • What is the customer buying from us?
  • Are there new requirements?
  • Does the customer or company require a DFMEA?
  • Do we make the product and have design control?
  • Do we buy the product and still have design control?
  • Do we buy the product and do not have design control?
  • Who is responsible for the interface design?
  • Do we need a system, subsystem, component, or other level of analysis?

Answers to these questions and others defined by the company help create the list of DFMEA projects needed. The DFMEA project list assures consistent direction, commitment and focus. The following may assist the team in defining DFMEA boundaries as applicable:

  • Legal requirements
  • Technical requirements
  • Customer wants/needs/expectations (external and internal customers)
  • Requirements specification
  • Diagrams (Block/Boundary) from similar project
  • Schematics, drawings, and/or 3D models
  • Bill of materials (BOM), risk assessment
  • Previous FMEA for similar products
  • Error-proofing requirements, Design for Manufacturability and Assembly (DFM/A)
  • QFD Quality Function Deployment

The following may be considered in defining the scope of the DFMEA as appropriate:

  • Novelty of technology/ degree of innovation
  • Quality/reliability history (in-house, zero mileage, field failures, warranty and policy claims for similar product)
  • Complexity of design
  • Safety of people and systems
  • Cyber-physical system (including Cyber security)
  • Legal compliance
  • Catalog & standard parts

1.3 DFMEA Project Plan
A plan for the execution of the DFMEA should be developed once the DFMEA project is known. It is recommended that the 5T method (InTent, Timing, Team, Tasks, Tool) be used. The plan for the DFMEA helps the company be proactive in starting the DFMEA early. The DFMEA activities (7-Step process) should be incorporated into the overall project plan.

1.4 Identification of the Baseline DFMEA
Part of the preparation for conducting the DFMEA is knowing what information is already available that can help the cross-functional team. This includes use of a foundation DFMEA, similar product DFMEA, or product family DFMEA. The family DFMEA is a specialized foundation design FMEA for products that generally contain common or consistent product boundaries and related functions. For a new product in the family, the new project-specific components and functions needed to complete the new product's DFMEA would be added to the family FMEA. The additions for the new product may be in the family DFMEA itself, or in a new document with reference to the original family or foundation DFMEA. If no baseline is available, then the team will develop a new DFMEA.

1.5 DFMEA Header
During the Planning and Preparation Step, the header of the DFMEA document should be filled out. The header may be modified to meet the needs of the organization. The header includes some of the basic DFMEA scope information as follows:

  • Company Name: Name of Company Responsible for DFMEA
  • Engineering Location: Geographical Location
  • Customer Name: Name of Customer(s) or Product
  • Model Year / Program(s): Customer Application or Company Model /Style
  • Subject: Name of DFMEA Project (System, Subsystem and/or Component)
  • DFMEA Start Date: Start Date
  • DFMEA Revision Date: Latest Revision Date.
  • Cross-Functional Team: Team Roster needed
  • DFMEA ID Number: Determined by Company.
  • Design Responsibility: Name of DFMEA owner
  • Confidentiality Level: Business Use, Proprietary, Confidential
Example of Completed DFMEA Header Planning and Preparation
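
Captured in software, the header is simply a set of key/value pairs, one per field above. A sketch in which every value is an invented placeholder, not real program data:

    # DFMEA header as key/value pairs (field set per the list above).
    # Every value here is a hypothetical placeholder for illustration.
    dfmea_header = {
        "Company Name": "ACME Motors",
        "Engineering Location": "Detroit, MI",
        "Customer Name": "OEM X",
        "Model Year / Program(s)": "2026 / Program P123",
        "Subject": "Window Lift Motor (Subsystem)",
        "DFMEA Start Date": "2024-01-15",
        "DFMEA Revision Date": "2024-06-01",
        "Cross-Functional Team": ["Design", "Quality", "Manufacturing", "Service"],
        "DFMEA ID Number": "DFMEA-0042",
        "Design Responsibility": "J. Doe",
        "Confidentiality Level": "Proprietary",
    }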

1.6 Basis for Structure Analysis
The information gathered during Step 1 Planning and Preparation will be used to develop Step 2 Structure Analysis.

Step 2: Structure Analysis

2.1 Purpose
The purpose of Design Structure Analysis is to identify and breakdown the FMEA scope into system, subsystem, and component parts for technical risk analysis. The main objectives of a Design Structure Analysis are:

  • Visualization of the analysis scope
  • Structure tree or equivalent: block diagram, boundary diagram, digital model, physical parts
  • Identification of design interfaces, interactions, close clearances
  • Collaboration between customer and supplier engineering teams (interface responsibilities)
  • Basis for the Function Analysis step

2.2 System Structure
A system structure is comprised of system elements. Depending on the scope of analysis, the system elements of a design structure can consist of a system, subsystems, assemblies, and components. Complex structures may be split into several structures (work packages) or different layers of block diagrams and analyzed separately for organizational reasons or to ensure sufficient clarity. A system has a boundary separating it from other systems and the environment. Its relationship with the environment is defined by inputs and outputs. A system element is a distinct component of a functional item, not a function, a requirement, or a feature.

2.3 Define the Customer
There are two major customers to be considered in the FMEA analysis:

  • END USER: The individual who uses a product after it has been fully developed and marketed.
  • ASSEMBLY and MANUFACTURING: the locations where manufacturing operations (e.g., powertrain, stamping and fabricating) and vehicle/product assembly and production material processing take place. Addressing the interfaces between the product and its assembly process is critical to an effective FMEA analysis. This may be any subsequent or downstream operation or a next-Tier manufacturing process.

Knowledge of these customers can help to define the functions, requirements and specifications more robustly as well as aid in determining the effects of related failure modes.

2.4 Visualize System Structure
A visualization of the system structure helps the DFMEA team develop the structural analysis. There are various tools which may be used by the team to accomplish this. Two methods commonly used are:

  • Block/Boundary Diagrams
  • Structure Tree

2.4.1 Block/Boundary Diagram
Block/Boundary Diagrams are useful tools that depict the system under consideration and its interfaces with adjacent systems, the environment, and the customer. The diagram is a graphic representation that provides guidelines for structured brainstorming and facilitates the analysis of system interfaces as a foundation for a Design FMEA. The diagram below shows the physical and logical relationships between the components of the product. It indicates the interaction of components and subsystems within the scope of the design, as well as the interfaces to the product's Customer, Manufacturing, Service, Shipping, etc. The diagram identifies persons and things that the design interacts with during its useful life. The Boundary Diagram can be used to identify the Focus Elements to be assessed in the Structure Analysis and Function Analysis.

The diagram may be in the form of boxes connected by lines, with each box corresponding to a major component of the product. The lines correspond with how the product components are related to, or interface with, each other, with arrows at the end point(s) to indicate the direction of flow. Interfaces between elements in the Boundary Diagram can be included as Focus Elements in the Structure and Function Analysis Structure Tree.

There are different approaches and formats to the construction of a Block/Boundary Diagram, which are determined by the organization. The terms "Block Diagram" and "Boundary Diagram" are used interchangeably; however, the Boundary Diagram tends to be more comprehensive due to the inclusion of external influences and system interactions. In the context of the DFMEA, Block/Boundary Diagrams define the analysis scope and responsibility and provide guidelines for structured brainstorming. The scope of analysis is defined by the boundaries of the system; however, interfaces with external factors/systems are to be addressed.

  • Defines scope of analysis (helps to identify potential team members)
  • Identifies internal and external interfaces
  • Enables application of system, sub-system, and component hierarchy

When correctly constructed, Block/Boundary Diagrams provide detailed information to the P-Diagram and the FMEA. Although Block/Boundary Diagrams can be constructed to any level of detail, it is important to identify the major elements, understand how they interact with each other, and understand how they may interact with outside systems. Block/Boundary Diagrams are steadily refined as the design matures. The steps involved in completing a Block/Boundary Diagram may be described as follows:

  1. Describe components and features
    • Naming the parts and features helps alignment within the team, particularly when features have “nicknames”
    • All system components and interfacing components shown
  2. Reorganize blocks to show linkages:
    • Solid line for direct contact
    • Dashed line for indirect interfaces, e.g., clearances or relative motion
    • Arrows indicate direction
    • All energy flows, signal transfers, or force transfers identified
  3. Describe connections
    • Consider all types of interfaces, both desired and undesired:
      • P – Physically touching (welded, bolted, clamped, etc.)
      • E – Energy transfer (torque (Nm), heat, etc.)
      • I – Information transfer (ECU, sensors, signals, etc.)
      • M – Material exchange (cooling fluid, exhaust gases, etc.)
  4. Add interfacing systems and inputs (persons and things). The following should be included:
    • Adjacent systems, including systems that are not physically touching your system but that may interact with it, require clearance, involve motion, or involve thermal exposure.
    • The customer and/or end user
    • Arrows indicate direction
  5. Define the boundary (What parts are within the span of control of the team? What is new or modified?). Only parts designed or controlled by the team are inside the boundary. The blocks within the boundary diagram are one level lower than the level being analyzed. Blocks within the boundary may be marked to indicate items that are not part of the analysis.
  6. Add relevant details to identify the diagram.
    • System, program and team identification
    • Key to any colors or line styles used to identify different types of interactions
    • Date and revision level
Example of Block/Boundary Diagram

2.4.2 Interface Analysis
An interface analysis describes the interactions between elements of a system. There are five primary types of interfaces:

  • Physical connection (e.g., brackets, belts, clamps, and various types of connectors)
  • Material exchange (e.g., compressed air, hydraulic fluids or any other fluid or material exchange)
  • Energy transfer (e.g., heat transfer, friction or motion transfer such as chain links or gears)
  • Data exchange (e.g., computer inputs or outputs, wiring harnesses, electrical signals or any other types of information exchange, cyber security items)
  • Human-Machine (e.g., controls, switches, mirrors, displays, warnings, seating, entry/exit)

Another type of interface may be described as a physical clearance between parts, where there is no physical connection. Clearances may be static and/or dynamic. Consider the interfaces between subsystems and components in addition to the content of the sub-systems and components themselves. An interface analysis documents the nature (strong/weak/none/beneficial/harmful) and type of relationships (Physical, Energy, Information, or Material Exchange) that occur at all internal and external interfaces graphically displayed in the Block/Boundary Diagram. Information from an interface analysis provides valuable input to a Design FMEA, such as the primary functions or interface functions to be analyzed, with potential causes/mechanisms of failure due to effects from neighboring systems and environments. Interface analysis also provides input to the P-Diagram on ideal functions and noise factors.
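For teams that manage interface analyses in software rather than only on a form sheet, the interface types and their documented nature map naturally onto a small data model. The following is a minimal sketch under that assumption; the element names in the sample entries are illustrative, not handbook content.

```python
from dataclasses import dataclass
from enum import Enum

class InterfaceType(Enum):
    """Primary interface types, plus physical clearance (no contact)."""
    PHYSICAL = "P"        # physically touching: welded, bolted, clamped
    ENERGY = "E"          # energy transfer: torque, heat, friction
    INFORMATION = "I"     # data/information: ECU, sensors, signals
    MATERIAL = "M"        # material exchange: cooling fluid, exhaust gases
    HUMAN_MACHINE = "HM"  # controls, switches, displays, warnings
    CLEARANCE = "C"       # static and/or dynamic clearance between parts

@dataclass
class Interface:
    """One documented relationship from the Block/Boundary Diagram."""
    source: str
    target: str
    kind: InterfaceType
    nature: str  # strong / weak / none / beneficial / harmful

interfaces = [
    Interface("Brush Card", "Motor Housing", InterfaceType.PHYSICAL, "strong"),
    Interface("ECU", "Motor", InterfaceType.INFORMATION, "beneficial"),
    Interface("Motor", "Adjacent Bracket", InterfaceType.CLEARANCE, "none"),
]

# Interfaces documented this way can feed the DFMEA as potential Focus Elements:
for i in interfaces:
    print(f"{i.source} -> {i.target}: {i.kind.name} ({i.nature})")
```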

2.4.3 Structure Trees
The structure tree arranges system elements hierarchically and illustrates the dependencies via the structural connections. Because each system element exists only once, redundancy is prevented and a clearly structured illustration of the complete system is guaranteed. The structures arranged under each system element are independent sub-structures. The interactions between system elements may be described as functions and represented by function nets. There is always a system element present, even if it is only derived from the function and cannot yet be specified more clearly.

Example of Structure Analysis Structure Tree
Structural Analysis (Step 2)
1. Next Higher Level | 2. Focus Element | 3. Next Lower Level or Characteristic Type
Window Lifter Motor | Commutation System | Brush Card Base Body
Example of Structure Analysis Form Sheet
  1. Next Higher Level: The highest level of integration within the scope of analysis.
  2. Focus Element: The element in focus. This is the item that is the topic of consideration of the failure chain.
  3. Next Lower Level or Characteristic Type: The element that is the next level down the structure from the focus element.
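The same structure can be represented programmatically, which is convenient when the tree must be checked (for example, that each element exists only once) or exported to a form sheet. A minimal sketch, using the window lifter elements from the form-sheet example above; the class and function names are our own:

```python
from dataclasses import dataclass, field

@dataclass
class SystemElement:
    """A node in the structure tree; each element should exist exactly once."""
    name: str
    children: list["SystemElement"] = field(default_factory=list)

    def add(self, child: "SystemElement") -> "SystemElement":
        self.children.append(child)
        return child

# Structure from the form-sheet example above:
motor = SystemElement("Window Lifter Motor")                  # 1. next higher level
commutation = motor.add(SystemElement("Commutation System"))  # 2. focus element
commutation.add(SystemElement("Brush Card Base Body"))        # 3. next lower level

def walk(element: SystemElement, depth: int = 0) -> None:
    """Print the hierarchy, one level of indentation per structure level."""
    print("  " * depth + element.name)
    for child in element.children:
        walk(child, depth + 1)

walk(motor)
```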

2.5 Collaboration between Customer and Supplier
The output of the Structure Analysis (visualization of the design and its interfaces) provides a tool for collaboration between customers and suppliers during technical reviews of the design and/or DFMEA project.

2.6 Basis for Function Analysis
The information defined during Step 2 Structure Analysis will be used to develop Step 3 Function Analysis. If design elements (items) are missing from the Structure Analysis they will also be missing from the Function Analysis.

Step 3: Function Analysis

3.1 Purpose

The purpose of the Design Function Analysis is to ensure that the functions specified by requirements/specifications are appropriately allocated to the system elements. Regardless of the
tool used to generate the DFMEA, it is critical that the analysis is written in functional terms. The main objectives of a Design Function Analysis are:

  • Visualization of product or process functions
  • Function tree/net or function analysis form sheet and parameter diagram (P-diagram)
  • Cascade of customer (external and internal) functions with associated requirements
  • Association of requirements or characteristics to functions
  • Collaboration between engineering teams (systems, safety, and components)
  • Basis for the Failure Analysis step

The structure provides the basis so that each System Element may be individually analyzed with regard to its functions and requirements. For this, comprehensive knowledge of the system and of its operating and environmental conditions is necessary, for example: heat, cold, dust, splash water, salt, icing, vibrations, electrical failures, etc.

3.2 Function
A function describes what the item/system element is intended to do. A function is to be assigned to a system element. Also, a system element can contain multiple functions. The description of a function needs to be clear. The recommended phrase format is an “action verb” followed by a “noun” that describes a measurable function. A function should be in the present tense; it uses the verb's base form (e.g., deliver, contain, control, assemble, transfer).
Examples: deliver power, contain fluid, control speed, transfer heat, color black. Functions describe the relationship between the input and output of an item/system element with the aim of fulfilling a task.
Note: A component (i.e., a part or item in a part list) may have a purpose/function where there is no input/output. Examples such as a seal, grease, clip, bracket, housing, connector, flux, etc. have functions and requirements including material, shape, thickness, etc. In addition to the primary functions of an item, other functions that may be evaluated include secondary functions such as interface functions, diagnostic functions, and serviceability functions.
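Because the verb-noun convention is easy to violate, some teams screen function descriptions automatically before review. The check below is deliberately crude and is an assumption about how such a screen might work, not part of the method; it only rejects single-word entries that cannot be verb + noun.

```python
import re

# Crude screen: at least two words. Grammar and tense cannot be verified this
# way; the check only flags obviously malformed entries such as a bare adjective.
FUNCTION_PATTERN = re.compile(r"^[A-Za-z]+(\s+\w+)+$")

def looks_like_function(description: str) -> bool:
    return bool(FUNCTION_PATTERN.match(description.strip()))

for text in ("deliver power", "control speed", "broken", "not OK"):
    print(f"{text!r}: {looks_like_function(text)}")
# 'broken' fails (one word); 'not OK' wrongly passes, so human review remains essential.
```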

Input/Interface/Output Flow

3.3 Requirements
Requirements are divided into two groups: functional requirements and non-functional requirements. A functional requirement is a criterion by which the intended performance of the function is judged or measured (e.g., material stiffness). A non-functional requirement is a limitation on the freedom for design decisions (e.g., temperature range). Requirements may be derived from various sources, external and internal; these could be:
Legal Requirements: e.g., environmentally friendly product design, suitability for recycling, safety in the event of potential misuse by the operator, non-flammability, etc.
Industry Norms and Standards: e.g., ISO 9001, VDA Volume 6 Part 3 Process Audit, SAE J1739, ISO 26262 Functional Safety.
Customer Requirements: Explicit (i.e., in the customer specification) and implicit (i.e., freedom from prohibited materials), under all specified conditions.

Internal Requirements: Product specific (i.e., requirements specifications, manufacturability, suitability for testing, compatibility with other existing products, reusability, cleanliness, generation, entry and spreading of particles).
Product Characteristics: A distinguishing feature (or quantifiable attribute) of a product such as a journal diameter or surface finish.

3.4 Parameter Diagram (P-Diagram)
Parameters are considered to be attributes of the behavior of a function. A Parameter (P) Diagram is a graphical representation of the environment in which an item exists. A P-Diagram includes factors which influence the transfer function between inputs and outputs, focusing on the design decisions necessary to optimize output. A P-Diagram is used to characterize the behavior of a system or component in the context of a single function. P-Diagrams are not required for all functions. Teams should focus on a few key functions affected by new conditions and those with a history of robustness issues in previous applications. More than one P-Diagram may be needed in order to illustrate the function(s) of the system or component that are of concern to the FMEA team.

The complete functional description forms the basis for subsequent failure analysis and risk mitigation. A P-Diagram focuses on achievement of function. It clearly identifies all influences on that function, including what can be controlled (Control Factors) and what cannot reasonably be controlled (Noise Factors). The P-Diagram, completed for specific Ideal Functions, assists in the identification of:

  • Factors, levels, responses, and signals necessary for system optimization
  • Functions which are inputs to the DFMEA
  • Control and Noise factors which could affect functional performance
  • Unintended system outputs (Diverted Outputs)
  • Information gained through developing a P-Diagram provides input to the test plan.

Referring to the figure below, the output (grey area) of the item/System Element often deviates/varies from the desired behavior (straight line). The control factors act on the design to achieve as close as practical to the desired behavior.

Example of system behavior

A Parameter Diagram consists of dynamic inputs (including signals), factors that could affect system performance (control and noise), sources of variation, and outputs (intended outputs and
unintended/diverted outputs). The following is an example of a Parameter Diagram which is used to assess the influences on a function of a product including:

  • Input (What you want to put in to get the desired result) is a description of the sources required for fulfilling the system functionality.
  • Function (What you want to happen) is described in a Parameter Diagram with an active verb followed by a measurable noun in the present tense and associated with requirements.
  • Functional Requirements (What you need to make the function happen) are related to the performance of a function
  • Control Factors (What you can do to make it happen) which can be adjusted to make the design more insensitive to noise (more robust) are identified. One type of Control Factor is a Signal Factor. Signal Factors are adjustment factors, set directly or indirectly by a user of a system, that proportionally change the system response (e.g., brake pedal movement changes stopping distance). Only dynamic systems utilize signal factors. Systems without signal factors are called static systems.
  • Non-Functional Requirements (What you need beside the functional requirements) which limit the design option.
  • Intended Output (What you want from the system) are ideal, intended functional outputs whose magnitude may (dynamic system) or may not (static system) be linearly proportional to a signal factor (e.g., low beam activation for a headlamp, stopping distance as a function of brake pedal movement).
  • Unintended Output (What you don’t want from the system) are malfunctioning behaviors or unintended system outputs that divert system performance from the ideal intended function. For example, energy associated with a brake system is ideally transformed into friction. Heat, noise, and vibration are examples of brake energy diverted outputs. Diverted Outputs may be losses to thermal radiation, vibration, electrical resistance, flow restriction, etc.
  • Noise Factors (What interferes with achieving the desired output) are parameters which represent potentially significant sources of variation for the system response and cannot be controlled or are not practical to control from the perspective of the engineer. Noises are described in physical units. Noise factors are categorized as follows:
    • Piece to Piece Variation (in a component and interference between components)
    • Change Over Time (aging over lifetime, e.g., mileage, aging, wear)
    • Customer Usage (use out of desired specifications)
    • External Environment (conditions during customer usage, e.g., road type, weather)
    • System Interactions (interference from other systems)
Example of Parameter Diagram with Electrical Motor
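When P-Diagrams are maintained as data rather than drawings, the factor categories above translate directly into one record per function. A minimal sketch, with illustrative entries for an electrical motor; the specific factors listed are assumptions, not handbook content.

```python
from dataclasses import dataclass, field
from enum import Enum

class NoiseCategory(Enum):
    PIECE_TO_PIECE = "piece-to-piece variation"
    CHANGE_OVER_TIME = "change over time"
    CUSTOMER_USAGE = "customer usage"
    EXTERNAL_ENVIRONMENT = "external environment"
    SYSTEM_INTERACTIONS = "system interactions"

@dataclass
class PDiagram:
    """Collects the elements of a Parameter Diagram for a single function."""
    function: str                     # active verb + measurable noun, present tense
    inputs: list[str] = field(default_factory=list)
    control_factors: list[str] = field(default_factory=list)
    signal_factors: list[str] = field(default_factory=list)  # dynamic systems only
    intended_outputs: list[str] = field(default_factory=list)
    diverted_outputs: list[str] = field(default_factory=list)
    noise_factors: dict[NoiseCategory, list[str]] = field(default_factory=dict)

p = PDiagram(
    function="convert electrical energy into mechanical energy",
    inputs=["electrical current"],
    control_factors=["winding geometry", "brush material"],
    intended_outputs=["torque at output shaft"],
    diverted_outputs=["heat", "noise", "vibration"],
    noise_factors={
        NoiseCategory.CHANGE_OVER_TIME: ["brush wear"],
        NoiseCategory.EXTERNAL_ENVIRONMENT: ["ambient temperature"],
    },
)
```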

3.5 Function Analysis

The interactions of the functions of several system elements are to be demonstrated, for example, as a function tree/network or using the DFMEA form sheet. The focus of the analysis cascades from OEM to Tier 1 supplier to Tier N supplier. The purpose of creating a function tree/network or function analysis on the DFMEA form sheet is to incorporate the technical dependency between the functions. It therefore subsequently supports the visualization of the failure dependencies. When there is a functional relationship between hierarchically linked functions, then there is a potential relationship between the associated failures. Otherwise, if there is no functional relationship between hierarchically linked functions, there will also be no potential relationship between the associated failures. For the preparation of the function tree/network, the functions that are involved need to be examined. Sub-functions enable the performance of an overall function. All sub-functions are linked logically with each other in the function structure (Boolean AND relationships). A function structure becomes more detailed from the top down. The lower level function describes how the higher level function is to be fulfilled. For the logical linking of a function structure, it is helpful to ask:

  • “How is the higher level function enabled by lower level functions?” (Top-Down) and
  • “Why is the lower level function needed?” (Bottom-Up).
Example of Function Analysis Structure Tree

The function structure can be created in the Function Analysis section:

Function Analysis (Step 3)
1. Next Higher Level Function and Requirement | 2. Focus Element Function and Requirement | 3. Next Lower Level Function and Requirement or Characteristic
Convert electrical energy into mechanical energy according to parameterization | Commutation system transports the electrical current between coil pairs of the electromagnetic converters | Brush card body transports forces between spring and motor body to hold the brush spring system in x, y, z position (support commutating contact points)
Example of Function Analysis Form Sheet


  1. Next Higher Level Function and Requirement: The function in scope of the Analysis.
  2. Focus Element Function and Requirement: The function of the associated System Element (item in focus) identified in the Structure Analysis.
  3. Next Lower Level Function and Requirement or Characteristic: The function of the associated Component Element identified in the Structure Analysis.
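The How/Why questions define the two directions of traversal through the function net, which suggests a simple linked representation. The sketch below uses the window lifter functions from the form-sheet example above; the class and method names are our own, not handbook terminology.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A node in the function net: 'enablers' answers How?, 'parent' answers Why?"""
    description: str
    enablers: list["Function"] = field(default_factory=list)  # lower level (How?)
    parent: "Function | None" = None                          # higher level (Why?)

    def enabled_by(self, lower: "Function") -> "Function":
        lower.parent = self
        self.enablers.append(lower)
        return lower

higher = Function("convert electrical energy into mechanical energy "
                  "according to parameterization")
focus = higher.enabled_by(Function("transport the electrical current "
                                   "between coil pairs"))
lower = focus.enabled_by(Function("hold the brush spring system "
                                  "in x, y, z position"))

# Why is the lower level function needed? -> the function one level up:
print(lower.parent.description)
# How is the higher level function enabled? -> the functions one level down:
print([f.description for f in higher.enablers])
```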

3.6 Collaboration between Engineering Teams (Systems, Safety, and Components)
Engineering teams within the company need to collaborate to make sure information is consistent for a project or customer program especially when multiple DFMEA teams are simultaneously conducting the technical risk analysis. For example, a systems group might be developing the design architecture (structure) and this information would be helpful to the DFMEA to avoid duplication of work. A safety team may be working with the customer to understand the safety goals and hazards. This information would be helpful to the DFMEA to ensure consistent severity ratings for failure effects.

3.7 Basis for Failure Analysis
Complete definition of functions (in positive words) will lead to a comprehensive Step 4 Failure Analysis because the potential failures are ways the functions could fail (in negative words).

Step 4: Failure Analysis

4.1 Purpose
The purpose of the Design Failure Analysis is to identify failure causes, modes, and effects, and show their relationships to enable risk assessment. The main objectives of a Design Failure Analysis are:

  • Establishment of the Failure Chain
  • Potential Failure Effects, Failure Modes, and Failure Causes for each product function
  • Collaboration between customer and supplier (Failure Effects)
  • Basis for the documentation of failures in the FMEA form sheet and the Risk Analysis step

4.2 Failures

Failures of a function are derived from the function descriptions. There are several types of potential failure modes including, but not limited to:

  • Loss of function (e.g. inoperable, fails suddenly)
  • Degradation of function (e.g. performance loss over time)
  • Intermittent function (e.g. operation randomly starts/stops/starts)
  • Partial function (e.g. performance loss)
  • Unintended function (e.g. operation at the wrong time, unintended direction, unequal performance)
  • Exceeding function (e.g. operation above acceptable threshold)
  • Delayed function (e.g. operation after unintended time interval)
Types of Failure Modes

A system or subsystem failure mode is described in terms of functional loss or degradation, e.g., steering turns right when the hand wheel is moved left (an example of an unintended function). When necessary, the operating condition of the vehicle should be included, e.g., loss of steering assist during start-up or shut-down. A component/part failure mode is comprised of a noun and a failure description, e.g., seal twisted. It is critical that the description of the failure is clear and understandable for the person who is intended to read it. A statement such as “not fulfilled,” “not OK,” “defective,” “broken,” and so on is not sufficient. More than one failure may be associated with a function. Therefore, the team should not stop as soon as one failure is identified; they should ask, “How else can this fail?”

Definition of Failure

4.3 The Failure Chain

There are three different aspects of a Failure analyzed in an FMEA:

  • Failure Effect (FE)
  • Failure Mode (FM)
  • Failure Cause (FC)
Theoretical failure chain model

4.4 Failure Effects

A Failure Effect is defined as the consequence of a failure mode. Describe effects on the next level of product integration (internal or external), the end user who is the vehicle operator (external), and government regulations (regulatory) as applicable. Customer effects should state what the user might notice or experience including those effects that could impact safety. The intent is to forecast the failure effects consistent with the team’s level of knowledge. A failure mode can have multiple effects relating to internal and external customers. Effects may be shared by OEMs with suppliers and suppliers with sub-suppliers as part of design collaboration. The severity of failure effects is evaluated on a ten-point scale. Examples of failure effects on the end user:

  • No discernible effect
  • Poor appearance, e.g., unsightly close-out, color fade, cosmetic corrosion
  • Noise, e.g., misalignment/rub, fluid-borne noise, squeak/rattle, chirp, and squawk
  • Unpleasant odor, rough feel, increased efforts
  • Operation impaired, intermittent, unable to operate, electromagnetic incompatibility (EMC)
  • External leak resulting in performance loss, erratic operation, unstable operation
  • Unable to drive vehicle (walk home)
  • Noncompliance with government regulations
  • Loss of steering or braking

NOTE: In some cases, the team conducting the analysis may not know the end user effect, e.g., catalogue parts, off-the-shelf products, Tier 3 components. When this information is not known, the effects should be defined in terms of the part function and specification. In these cases, the system integrator is responsible for ensuring the correct part for the application is selected, e.g., auto, truck, marine, agriculture.

4.5 Failure Mode
A Failure Mode is defined as the manner in which an item could fail to meet or deliver the intended function. The Failure Modes are derived from the Functions. Failure Modes should be described in technical terms and not necessarily as symptoms noticeable by the customer. In preparing the DFMEA, assume that the design will be manufactured and assembled to the design intent. Exceptions can be made at the team's discretion where historical data indicates deficiencies exist in the manufacturing process. Examples of component-level failure modes include, but are not limited to:

Examples of system-level failure modes include, but are not limited to:

  • Complete fluid loss
  • Disengages too fast
  • Does not disengage
  • Does not transmit torque
  • Does not hold full torque
  • Inadequate structural support
  • Loss of structural support
  • No signal
  • Intermittent signal
  • Provides too much pressure/signal/voltage
  • Provides insufficient pressure/signal/voltage
  • Unable to withstand load/temperature/vibration

4.6 Failure Cause
A Failure Cause is an indication of why the failure mode could occur. The consequence of a cause is the failure mode. Identify, to the extent possible, every potential cause for each failure mode. The consequences of not being robust to noise factors (found on a P-Diagram) may also be Failure Causes. The cause should be listed as concisely and completely as possible so that remedial efforts (controls and actions) can be aimed at the appropriate causes. The Failure Causes can be derived from the failure modes of the next lower level function and requirement and from the potential noise factors (e.g., from a Parameter Diagram). Types of potential failure causes could be, but are not limited to:

  • Inadequate design for functional performance (e.g., incorrect material specified, incorrect geometry, incorrect part selected for application, incorrect surface finish specified, inadequate travel specification, improper friction material specified, insufficient lubrication capability, inadequate design life assumption, incorrect algorithm, improper maintenance instructions, etc.)
  • System interactions (e.g., mechanical interfaces, fluid flow, heat sources, controller feedback, etc.)
  • Changes over time (e.g., yield, fatigue, material instability, creep, wear, corrosion, chemical oxidation, electromigration, over-stressing, etc.)
  • Design inadequate for external environment (e.g., heat, cold, moisture, vibration, road debris, road salt, etc.)
  • End user error or behavior (e.g., wrong gear used, wrong pedal used, excessive speeds, towing, wrong fuel type, service damage, etc.)
  • Lack of robust design for manufacturing (e.g., part geometry allows part installation backwards or upside down, part lacks distinguishing design features, shipping container design causes parts to scratch or stick together, part handling causes damage, etc.)
  • Software issues (e.g., undefined state, corrupted code/data)

4.7 Failure Analysis
Depending on whether the analysis is being done at the system, sub-system or component level, a failure can be viewed as a failure effect, failure mode, or failure cause. Failure Modes, Failure
Causes, and Failure Effects should correspond with the respective column in the FMEA form sheet. Figure below shows a cascade of design-related failure modes, causes, and effects from the vehicle level to the characteristic level. The focus element (Failure Mode), Causes, and Effects are different depending on the level of design integration. Consequently, a Failure Cause at the OEM becomes a Failure Mode at a next (Tier 1) level. However, Failure Effects at the vehicle level (as perceived by the end user) should be documented when known, but not assumed. Failure Networks may be created by the organization that owns multiple levels of the design. When multiple organizations are responsible for different levels of the design they are responsible to communicate failure effects to the next higher or next lower level as appropriate.

Failure Structure at different levels

To link Failure Cause(s) to a Failure Mode, the question should be “Why is the Failure Mode happening?” To link Failure Effects to a Failure Mode, the question should be “What happens in the event of a Failure Mode?”

Example of Failure Analysis Structure Tree

The failure structure can be created in the Failure Analysis section.

Failure Analysis (Step 4)
1. Failure Effect (FE) to the Next Higher Level Element and/or End User | 2. Failure Mode (FM) of the Focus Element | 3. Failure Cause (FC) of the Next Lower Element or Characteristic
Torque and rotating velocity of the window lifter motor too low | Angle deviation: commutation system intermittently connects the wrong coils (L1, L3 and L2 instead of L1, L2 and L3) | Brush card body bends in contact area of the carbon brush
Example of Failure Analysis Form Sheet

Following once again the header numbering (1, 2, 3) and color coding, begin building the failure chain by inspecting the items in the Function Analysis.

  1. Failure Effects (FE): The effect of failure associated with the “Next Higher Level Element and/or End User” in the Function Analysis.
  2. Failure Mode (FM): The mode (or type) of failure associated with the “Focus Element” in the Function Analysis.
  3. Failure Cause (FC): The cause of failure associated with the “Next Lower Element or Characteristic” in the Function Analysis.
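A failure chain row is just a triple, and the level shift described in Section 4.7 (a cause at one level becomes the mode one level down) is a simple re-mapping of that triple. A minimal sketch, using the form-sheet example above; the Tier 1 cause supplied at the end is illustrative, not handbook content.

```python
from dataclasses import dataclass

@dataclass
class FailureChain:
    """One row of the Failure Analysis form sheet (FE -> FM -> FC)."""
    failure_effect: str   # FE: next higher level element and/or end user
    failure_mode: str     # FM: focus element
    failure_cause: str    # FC: next lower level element or characteristic

# Row taken from the form-sheet example above:
row = FailureChain(
    failure_effect="Torque and rotating velocity of the window lifter motor too low",
    failure_mode="Commutation system intermittently connects the wrong coils",
    failure_cause="Brush card body bends in contact area of the carbon brush",
)

def shift_down_one_level(chain: FailureChain, new_cause: str) -> FailureChain:
    """At the next lower level of integration, this level's cause becomes the
    failure mode and this level's mode becomes the effect (see Section 4.7)."""
    return FailureChain(
        failure_effect=chain.failure_mode,
        failure_mode=chain.failure_cause,
        failure_cause=new_cause,
    )

# Hypothetical Tier 1 cause, for illustration only:
tier1_row = shift_down_one_level(row, new_cause="Material thickness below specification")
```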

4.8 Failure Analysis Documentation
The DFMEA Form Sheet can have multiple views once the Structure Analysis. Function Analysis and Failure Analysis are complete.

4.9 Collaboration between Customer and Supplier (Failure Effects)

The output of the Failure Analysis may be reviewed by customers and suppliers prior to or after the Risk Analysis step, based on agreements with the customer and the need for sharing with the supplier.

4.10 Basis for Risk Analysis
Complete definition of potential failures will lead to a complete Step 5 Risk Analysis because the ratings of Severity, Occurrence, and Detection are based on the failure descriptions. The Risk Analysis may be incomplete if potential failures are too vague or missing.

Step 5: Risk Analysis

5.1 Purpose
The purpose of Design Risk Analysis is to estimate risk by evaluating Severity, Occurrence and Detection, and prioritize the need for actions. The main objectives of the Design Risk Analysis are:

  • Assignment of existing and/or planned controls and rating of failures
  • Assignment of Prevention Controls to the Failure Causes
  • Assignment of Detection Controls to the Failure Causes and/or Failure Modes
  • Rating of Severity, Occurrence, and Detection for each failure chain
  • Evaluation of Action Priority
  • Collaboration between customer and supplier (Severity)
  • Basis for the Optimization step

5.2 Design Controls
Current design controls are proven considerations that have been established for similar, previous designs. Design control documents are a basis for the robustness of the design. Prevention-type controls and detection-type controls are part of the current library of verification and validation methods. Prevention controls provide information or guidance that is used as an input to the design. Detection controls describe established verification and validation procedures that have been previously demonstrated to detect the failure, should it occur. Specific references to design features that act to prevent a failure, or to line items in published test procedures, will establish a credible link between the failure and the design control. Prevention and/or detection methods that are necessary but are not part of a current library of defined procedures should be written as actions in the DFMEA.

5.3 Current Prevention Controls (PC)
Current Prevention Controls describe how a potential cause which results in the Failure Mode is mitigated using existing and planned activities. They describe the basis for determining the occurrence rating. Prevention Controls relate back to the performance requirement. For items which have been designed out-of-context and are purchased as stock or catalog items from a supplier, the prevention control should document a specific reference to how the item fulfills the requirement. This may be a reference to a specification sheet in a catalog. Current Prevention Controls need to be clearly and comprehensively described, with references cited. If necessary, this can be done by reference to an additional document. Listing a control such as “proven material” or “lessons learned” is not a clear enough indication. The DFMEA team should also consider margin of safety in design as a prevention control. Examples of Current Prevention Controls:

  • EMC Directives adhered to, Directive 89/336/EEC
  • System design according to simulation, tolerance calculation, and Procedure – analysis of concepts to establish design requirements
  • Published design standard for a thread class
  • Heat treat specification on drawing
  • Sensor performance specifications.
  • Mechanical redundancy (fail-safe)
  • Design for test-ability
  • In Design and Material standards (internal and external)
  • Documentation (e.g., records of best practices. lessons learned, etc.) from similar designs
  • Error-proofing (Poka-Yoke design, i.e., part geometry prevents wrong orientation)
  • Substantially identical to a design which was validated for a previous application, with documented performance history. (However, if there is a change to the duty cycle or operating conditions, then the carry-over item requires re-validation in order for the detection control to be relevant.)
  • Shielding or guards which mitigate potential mechanical wear, thermal exposure, or EMC.
  • Conformance to best practices

After completion of the preventive actions the occurrence is verified by the Detection Control(s).

5.4 Current Detection Controls (DC)
Current Detection Controls detect the existence of a failure cause or the failure mode before the item is released for production. Current Detection Controls that are listed in the FMEA represent planned activities (or activities already completed), not potential activities which may never actually be conducted. Current Detection Controls need to be clearly and comprehensively described. Listing a control such as “Test” or “Lab Test” is not a clear enough indication of a detection control. References to specific tests, test plans, or procedures should be cited as applicable, to indicate that the FMEA team has determined that the test will actually detect the failure mode or cause, if it occurs (e.g., Test No. 1234 Burst Pressure Test, Paragraph 6.1). Examples of Current Detection Controls:

  • Function check
  • Burst test
  • Environmental test
  • Driving test
  • Endurance test
  • Range of motion studies
  • Hardware in-the-loop
  • Software in-the-loop
  • Design of experiments
  • Voltage output lab measurements
All controls that lead to detection of the failure cause or the failure mode are entered into the “Current Detection Controls” column.
Prevention and Detection in the Design FMEA

5.5 Confirmation of Current Prevention and Detection Controls

The effectiveness of the current prevention and detection controls should be confirmed. This can be done during validation tear down reviews. Such confirmation can be documented within the DFMEA, or within other project documents, as appropriate, according to the team’s normal product development procedure. Additional action may be needed if the controls are proven not to be effective. The occurrence and detection evaluations should be reviewed when using FMEA entries from previous products, due to the possibility of different conditions for the new product.

Roadmap of design understanding

5.6 Evaluations
Each failure mode, cause and effect relationship is assessed to estimate risk. There are rating criteria for the evaluation of risk:

  • Severity (S): stands for the severity of the failure effect
  • Occurrence (O): stands for the occurrence of the failure cause
  • Detection (D): stands for the detection of the failure cause and/or failure mode that has occurred

Evaluation numbers from 1 to 10 are used for S, O and D respectively, where 10 stands for the highest risk contribution.
NOTE: It is not appropriate to compare the ratings of one team’s FMEA with the ratings of another team’s FMEA, even if the product/process appear to be identical, since each team’s environment is unique and thus their respective individual ratings will be unique (i.e., the ratings are subjective).

5.7 Severity (S)
The Severity rating (S) is a measure associated with the most serious failure effect for a given failure mode of the function being evaluated. The rating is used to identify priorities relative to the scope of an individual FMEA and is determined without regard for occurrence or detection. Severity should be estimated using the criteria in the Severity Table. The table may be augmented to include product-specific examples. The FMEA project team should agree on an evaluation criteria and rating system which is consistent, even if modified for individual design analysis. The Severity evaluations of the failure effects should be transferred by the customer to the supplier, as needed.

Product General Evaluation Criteria Severity (S)
Potential Failure Effects rated according to the criteria below. (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

S  | Effect    | Severity criteria
10 | Very High | Affects safe operation of the vehicle and/or other vehicles, the health of driver or passengers or road users or pedestrians.
9  | Very High | Noncompliance with regulations.
8  | High      | Loss of primary vehicle function necessary for normal driving during expected service life.
7  | High      | Degradation of primary vehicle function necessary for normal driving during expected service life.
6  | Moderate  | Loss of secondary vehicle function.
5  | Moderate  | Degradation of secondary vehicle function.
4  | Moderate  | Very objectionable appearance, sound, vibration, harshness, or haptics.
3  | Low       | Moderately objectionable appearance, sound, vibration, harshness, or haptics.
2  | Low       | Slightly objectionable appearance, sound, vibration, harshness, or haptics.
1  | Very Low  | No discernible effect.

DFMEA SEVERITY (S) TABLE

5.8 Occurrence (O)
The Occurrence rating (O) is a measure of the effectiveness of the prevention control, taking into account the rating criteria. Occurrence ratings should be estimated using the criteria in the Occurrence Table. The table may be augmented to include product-specific examples. The FMEA project team should agree on an evaluation criteria and rating system which is consistent, even if modified for individual design analysis (e.g., passenger car, truck, motorcycle, etc.). The Occurrence rating number is a relative rating within the scope of the FMEA and may not reflect the actual occurrence. The Occurrence rating describes the potential of the failure cause to occur in customer operation, according to the rating table, considering results of already completed detection controls. Expertise, data handbooks, warranty databases, or other experiences in the field of comparable products, for example, can be consulted for the analysis of the evaluation numbers. When failure causes are rated for occurrence, it is done taking into account an estimation of the effectiveness of the current prevention control. The accuracy of this rating depends on how well the prevention control has been described. Questions such as the following may be helpful for a team when trying to determine the appropriate Occurrence rating:

  • What is the service history and field experience with similar components, subsystems, or systems?
  • Is the item a carryover product or similar to a previous level item?
  • How significant are changes from a previous level item?
  • Is the item completely new?
  • What is the application or what are the environmental changes?
  • Has an engineering analysis (e.g. reliability) been used to estimate the expected comparable occurrence rate for the application?
  • Have prevention controls been put in place?
  • Has the robustness of the product been proven during the product development process?
Occurrence Potential (O) for the Product
Potential Failure Causes rated according to the criteria below. Consider Product Experience and Prevention Controls when determining the best Occurrence estimate (qualitative rating). (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

O  | Prediction of Failure Cause Occurring | Occurrence criteria – DFMEA
10 | Extremely High | First application of new technology anywhere without operating experience and/or under uncontrolled operating conditions. No product verification and/or validation experience. Standards do not exist and best practices have not yet been determined. Prevention controls not able to predict field performance or do not exist.
9  | Very High | First use of design with technical innovations or materials within the company. New application or change in duty cycle/operating conditions. No product verification and/or validation experience. Prevention controls not targeted to identify performance to specific requirements.
8  | Very High | First use of design with technical innovations or materials on a new application. New application or change in duty cycle/operating conditions. No product verification and/or validation experience. Few existing standards and best practices, not directly applicable for this design. Prevention controls not a reliable indicator of field performance.
7  | High | New design based on similar technology and materials. New application or change in duty cycle/operating conditions. No product verification and/or validation experience. Standards, best practices, and design rules apply to the baseline design, but not the innovations. Prevention controls provide limited indication of performance.
6  | High | Similar to previous designs, using existing technology and materials. Similar application, with changes in duty cycle or operating conditions. Previous testing or field experience. Standards and design rules exist but are insufficient to ensure that the failure cause will not occur. Prevention controls provide some ability to prevent a failure cause.
5  | Moderate | Detail changes to previous design, using proven technology and materials. Similar application, duty cycle, or operating conditions. Previous testing or field experience, or new design with some test experience related to the failure. Design addresses lessons learned from previous designs. Best practices re-evaluated for this design but have not yet been proven. Prevention controls capable of finding deficiencies in the product related to the failure cause and provide some indication of performance.
4  | Moderate | Almost identical design with short-term field exposure. Similar application with minor change in duty cycle or operating conditions. Previous testing or field experience. Predecessor design and changes for new design conform to best practices, standards, and specifications. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate likely design conformance.
3  | Low | Detail changes to known design (same application, with minor change in duty cycle or operating conditions) and testing or field experience under comparable operating conditions, or new design with successfully completed test procedure. Design expected to conform to standards and best practices, considering lessons learned from previous designs. Prevention controls capable of finding deficiencies in the product related to the failure cause and predict conformance of production design.
2  | Very Low | Almost identical mature design with long-term field exposure. Same application, with comparable duty cycle and operating conditions. Testing or field experience under comparable operating conditions. Design expected to conform to standards and best practices, considering lessons learned from previous designs, with significant margin of confidence. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate confidence in design conformance.
1  | Extremely Low | Failure eliminated through prevention control and failure cause is not possible by design.

Product Experience: History of product usage within the company (novelty of design, application, or use case). Results of already completed detection controls provide experience with the design.
Prevention Controls: Use of best practices for product design, design rules, company standards, lessons learned, industry standards, material specifications, government regulations, and effectiveness of prevention-oriented analytical tools including computer aided engineering, math modelling, simulation studies, tolerance stacks, and design/safety margins.
Note: Ratings of 10, 9, 8, and 7 can drop based on product validation activities.
DFMEA OCCURRENCE (O) TABLE

5.9 Detection (D)
The Detection rating (D) is an estimated measure of the effectiveness of the detection control to reliably demonstrate the failure cause or failure mode before the item is released for production. The detection rating is the rating associated with the most effective detection control. Detection is a relative rating within the scope of the individual FMEA and is determined without regard for severity or occurrence. Detection should be estimated using the criteria in the Detection Table. The FMEA project team should agree on an evaluation criteria and rating system which is consistent, even if modified for individual product analysis. The detection rating is initially a prediction of the effectiveness of any yet unproven control. The effectiveness can be verified and re-evaluated after the detection control is completed. However, the completion or cancellation of a detection control (such as a test) may also affect the estimation of occurrence. In determining this estimate, questions such as the following should be considered:

  • Which test is most effective in detecting the Failure Cause or the Failure Mode?
  • What usage profile/duty cycle is required to detect the failure?
  • What sample size is required to detect the failure?
  • Is the test procedure proven for detecting this Cause / Failure Mode?
Detection Potential (D) for the Validation of the Product Design
Detection Controls rated according to Detection Method Maturity and Opportunity for Detection. (The "Corporate or Product Line Examples" column is blank until filled in by the user.)

D         | Ability to Detect | Detection Method Maturity | Opportunity for Detection
10        | Very Low  | Test procedure yet to be developed. | Test method not defined
9         | Very Low  | Test method not designed specifically to detect failure mode or cause. | Pass-Fail, Test-to-Fail, Degradation Testing
8         | Low       | New test method; not proven. | Pass-Fail, Test-to-Fail, Degradation Testing
7 / 6 / 5 | Moderate  | Proven test method for verification of functionality or validation of performance, quality, reliability, and durability; planned timing is later in the product development cycle, such that test failures may result in production delays for re-design and/or re-tooling. | Pass-Fail (7), Test-to-Fail (6), Degradation Testing (5)
4 / 3 / 2 | High      | Proven test method for verification of functionality or validation of performance, quality, reliability, and durability; planned timing is sufficient to modify production tools before release for production. | Pass-Fail (4), Test-to-Fail (3), Degradation Testing (2)
1         | Very High | Prior testing confirmed that the failure mode or cause cannot occur, or detection methods proven to always detect the failure mode or failure cause. |

DFMEA DETECTION (D) TABLE

5.10 Action Priority (AP)
Once the team has completed the initial identification of Failure Modes, Failure Effects, Failure Causes, and controls, including ratings for severity, occurrence, and detection, they must decide if further efforts are needed to reduce the risk. Due to the inherent limitations on resources, time, technology, and other factors, they must choose how to best prioritize these efforts.

The Action Priority (AP) method was created to give more emphasis to severity first, then occurrence, then detection. This logic follows the failure-prevention intent of FMEA. The AP table offers a suggested high-medium-low priority for action. Companies can use a single system to evaluate action priorities instead of the multiple systems required by multiple customers.

Risk Priority Numbers are the product of S x O x D and range from 1 to 1000. The RPN distribution can provide some information about the range of ratings, but RPN alone is not an adequate method to determine the need for more actions, since RPN gives equal weight to S, O, and D. For this reason, RPN could result in similar risk numbers for very different combinations of S, O, and D, leaving the team uncertain about how to prioritize. When using RPN, it is recommended to use an additional method to prioritize like RPN results, such as S x O. The use of a Risk Priority Number (RPN) threshold is not a recommended practice for determining the need for actions. Risk matrices can represent combinations of S and O, S and D, and O and D. These matrices provide a visual representation of the results of the analysis and can be used as an input to prioritization of actions based on company-established criteria.

Since the AP table was designed to work with the Severity, Occurrence, and Detection tables, if the organization chooses to modify the S, O, D tables for specific products, processes, or projects, the AP table should also be carefully reviewed.
Note: Action Priority rating tables are the same for DFMEA and PFMEA, but different for FMEA-MSR.

Priority High (H): Highest priority for review and action. The team needs to either identify an appropriate action to improve Prevention and/or Detection Controls or justify and document why current controls are adequate.

Priority Medium (M): Medium priority for review and action. The team should identify appropriate actions to improve prevention and/or detection controls, or, at the discretion of the company, justify and document why controls are adequate.

Priority Low (L): Low priority for review and action. The team could identify actions to improve
prevention or detection controls.

It is recommended that potential Severity 9-10 Failure Effects with Action Priority High and Medium, at a minimum, be reviewed by management including any recommended actions that were taken. This is not the prioritization of High, Medium, or Low risk, it is the prioritization of the actions to reduce risk.
Note: It may be helpful to include a statement such as “No further action is needed” in the Remarks field as appropriate.

Action Priority (AP) for DFMEA and PFMEA
Action Priority is based on combinations of Severity, Occurrence, and Detection ratings in order to prioritize actions for risk reduction. (The Comments column is blank until filled in by the user.)

Detection bands: Low – Very low = D 7-10; Moderate = D 5-6; High = D 2-4; Very high = D 1.

Severity of Product or Plant Effect: Very high (S 9-10)
  Occurrence Very high (O 8-10): D 7-10 H | D 5-6 H | D 2-4 H | D 1 H
  Occurrence High (O 6-7):       D 7-10 H | D 5-6 H | D 2-4 H | D 1 H
  Occurrence Moderate (O 4-5):   D 7-10 H | D 5-6 H | D 2-4 H | D 1 M
  Occurrence Low (O 2-3):        D 7-10 H | D 5-6 M | D 2-4 L | D 1 L
  Occurrence Very low (O 1):     any D (1-10) L

Severity of Product or Plant Effect: High (S 7-8)
  Occurrence Very high (O 8-10): D 7-10 H | D 5-6 H | D 2-4 H | D 1 H
  Occurrence High (O 6-7):       D 7-10 H | D 5-6 H | D 2-4 H | D 1 M
  Occurrence Moderate (O 4-5):   D 7-10 H | D 5-6 M | D 2-4 M | D 1 M
  Occurrence Low (O 2-3):        D 7-10 M | D 5-6 M | D 2-4 L | D 1 L
  Occurrence Very low (O 1):     any D (1-10) L

Severity of Product or Plant Effect: Moderate (S 4-6)
  Occurrence Very high (O 8-10): D 7-10 H | D 5-6 H | D 2-4 M | D 1 M
  Occurrence High (O 6-7):       D 7-10 M | D 5-6 M | D 2-4 M | D 1 L
  Occurrence Moderate (O 4-5):   D 7-10 M | D 5-6 L | D 2-4 L | D 1 L
  Occurrence Low (O 2-3):        D 7-10 L | D 5-6 L | D 2-4 L | D 1 L
  Occurrence Very low (O 1):     any D (1-10) L

Severity of Product or Plant Effect: Low (S 2-3)
  Occurrence Very high (O 8-10): D 7-10 M | D 5-6 M | D 2-4 L | D 1 L
  Occurrence High (O 6-7):       D 7-10 L | D 5-6 L | D 2-4 L | D 1 L
  Occurrence Moderate (O 4-5):   D 7-10 L | D 5-6 L | D 2-4 L | D 1 L
  Occurrence Low (O 2-3):        D 7-10 L | D 5-6 L | D 2-4 L | D 1 L
  Occurrence Very low (O 1):     any D (1-10) L

No discernible effect (S 1): any O (1-10), any D (1-10) L

Table AP – ACTION PRIORITY FOR DFMEA AND PFMEA
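Because Table AP is a pure lookup on the S, O, and D bands, it can be transcribed into code for use in FMEA tooling. The sketch below is such a transcription of the table above (verify it against the published handbook before relying on it); it also computes the RPN discussed in Section 5.10 for comparison. Function and variable names are our own.

```python
S_BANDS = [(9, 10), (7, 8), (4, 6), (2, 3), (1, 1)]  # severity groups in Table AP
O_BANDS = [(8, 10), (6, 7), (4, 5), (2, 3), (1, 1)]  # occurrence groups
D_BANDS = [(7, 10), (5, 6), (2, 4), (1, 1)]          # detection groups

# AP[s_band][o_band][d_band], transcribed row by row from Table AP above.
AP = [
    [["H","H","H","H"], ["H","H","H","H"], ["H","H","H","M"], ["H","M","L","L"], ["L"]*4],  # S 9-10
    [["H","H","H","H"], ["H","H","H","M"], ["H","M","M","M"], ["M","M","L","L"], ["L"]*4],  # S 7-8
    [["H","H","M","M"], ["M","M","M","L"], ["M","L","L","L"], ["L"]*4,           ["L"]*4],  # S 4-6
    [["M","M","L","L"], ["L"]*4,           ["L"]*4,           ["L"]*4,           ["L"]*4],  # S 2-3
    [["L"]*4]*5,                                                                            # S 1
]

def _band(value: int, bands: list[tuple[int, int]]) -> int:
    """Return the index of the band containing a 1..10 rating."""
    for index, (low, high) in enumerate(bands):
        if low <= value <= high:
            return index
    raise ValueError(f"rating {value} outside 1..10")

def action_priority(s: int, o: int, d: int) -> str:
    """H/M/L per the transcription of Table AP above."""
    return AP[_band(s, S_BANDS)][_band(o, O_BANDS)][_band(d, D_BANDS)]

def rpn(s: int, o: int, d: int) -> int:
    """Risk Priority Number, S x O x D (1..1000); see Section 5.10 for its limitations."""
    return s * o * d

# A severity-9 failure keeps a High Action Priority even though its RPN looks
# modest, illustrating why RPN alone is not an adequate prioritization method:
print(action_priority(9, 4, 3), rpn(9, 4, 3))  # -> H 108
```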
Example of DFMEA Risk Analysis Form Sheet

5.11 Collaboration between Customer and Supplier (Severity)
The output of the Risk Analysis creates a mutual understanding of technical risk between customers and suppliers. Methods of collaboration range from verbal exchanges to formal reports. The amount of information shared is based on the needs of a project, company policy, contractual agreements, and so on. The information shared depends on the placement of the company in the supply chain. Some examples are listed below.

  • The OEM may compare design functions, failure effects, and severity from a vehicle-level DFMEA with the Tier 1 supplier DFMEA.
  • The Tier 1 supplier may compare design functions, failure effects, and severity from a subsystem DFMEA with the Tier 2 supplier who has design responsibility.
  • The Tier 1 supplier communicates necessary information about product characteristics on product drawings and/or specifications, or other means. including designation of standard or special characteristics and severity. This information is used as an input to the Tier 2 supplier PFMEA as well as the Tier 1’s internal PFMEA. When the design team communicates the associated risk of making product characteristics out of specification the process team can build in the appropriate level of prevention and detection controls in manufacturing.

5.12 Basis for Optimization
The output of Steps 1, 2, 3, 4, and 5 of the 7-step FMEA process is used to determine if additional design or testing action is needed. The design reviews, customer reviews, management reviews, and cross-functional team meetings lead to Step 6 Optimization.

Step 6: Optimization

6.1 Purpose
The purpose of the Design Optimization is to determine actions to mitigate risk and assess the effectiveness of those actions. The main objectives of a Design Optimization are:

  • Identification of the actions necessary to reduce risks
  • Assignment of responsibilities and deadlines for action implementation
  • Implementation and documentation of actions taken including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
  • Collaboration between the FMEA team, management, customers, and suppliers regarding potential failures
  • Basis for refinement of the product requirements and prevention and detection controls

The primary objective of Design Optimization is to develop actions that reduce risk and increase customer satisfaction by improving the design. In this step, the team reviews the results of the risk analysis and assigns actions to lower the likelihood of occurrence of the Failure Cause or increase the robustness of the Detection Control to detect the Failure Cause or Failure Mode. Actions may also be assigned which improve the design but do not necessarily lower the risk assessment rating. Actions represent a commitment to take a specific, measurable, and achievable action, not potential actions which may never be implemented. Actions are not intended to be used for activities that are already planned, as these are documented in the Prevention or Detection Controls and are already considered in the initial risk analysis. If the team decides that no further actions are necessary, “No further action is needed” is written in the Remarks field to show the risk analysis was completed. The DFMEA should be used to assess technical risks related to continuous improvement of the design. The optimization is most effective in the following order:

  • Design modifications to eliminate or mitigate a Failure Effect (FE).
  • Design modifications to reduce the Occurrence (O) of the Failure Cause (FC)
  • Increase the Detection (D) ability for the Failure Cause (FC) or Failure Mode (FM).
  • In the case of design modifications, all impacted design elements are evaluated again.

In the case of concept modifications, all steps of the FMEA are reviewed for the affected sections. This is necessary because the original analysis is no longer valid, since it was based upon a different design concept.

6.2 Assignment of Responsibilities
Each action should have a responsible individual and a Target Completion Date (TCD) associated with it. The responsible person ensures the action status is updated. If the action is confirmed this person is also responsible for the action implementation. The Actual Completion Date for Preventive and Detection Actions is documented including the date the actions are implemented.
Target Completion Dates should be realistic (i.e., in accordance with the product development plan, prior to process validation, prior to start of production).

6.3 Status of the Actions
Suggested levels for Status of Actions:

  • Open: No action defined.
  • Decision pending (optional): The action has been defined but has not yet been decided on. A decision paper is being created.
  • Implementation pending (optional): The action has been decided on but not yet implemented.
  • Completed: Completed actions have been implemented and their effectiveness has been demonstrated and documented. A final evaluation has been done.
  • Not Implemented: Not implemented status is assigned when a decision is made not to implement an action. This may occur when risks related to practical and technical limitations are beyond current capabilities.

The FMEA is not considered “complete” until the team assesses each item’s Action Priority and either accepts the level of risk or documents closure of all actions. If “No Action Taken,” then the Action Priority is not reduced, and the risk of failure is carried forward into the product design. Actions are open loops that need to be closed in writing.
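The suggested status levels behave like a small state model: an action only reaches “Completed” after implementation and a documented effectiveness check. The sketch below, assuming invented names and transition rules rather than any specific FMEA software, illustrates one way such a lifecycle could be tracked.

```python
# A minimal sketch of the suggested action-status lifecycle; the status
# names mirror the list above, the transition rules are an assumption.
from enum import Enum

class ActionStatus(Enum):
    OPEN = "Open"
    DECISION_PENDING = "Decision pending"              # optional level
    IMPLEMENTATION_PENDING = "Implementation pending"  # optional level
    COMPLETED = "Completed"
    NOT_IMPLEMENTED = "Not Implemented"

# "Completed" is reachable only from "Implementation pending", reflecting
# that effectiveness must be demonstrated after implementation.
ALLOWED_TRANSITIONS = {
    ActionStatus.OPEN: {ActionStatus.DECISION_PENDING, ActionStatus.IMPLEMENTATION_PENDING},
    ActionStatus.DECISION_PENDING: {ActionStatus.IMPLEMENTATION_PENDING, ActionStatus.NOT_IMPLEMENTED},
    ActionStatus.IMPLEMENTATION_PENDING: {ActionStatus.COMPLETED, ActionStatus.NOT_IMPLEMENTED},
}

def advance(current: ActionStatus, target: ActionStatus) -> ActionStatus:
    """Move an action to a new status, rejecting transitions the text forbids."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current.value!r} to {target.value!r}")
    return target
```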

6.4 Assessment of Action Effectiveness
When an action has been completed, Occurrence and Detection values are reassessed, and a new Action Priority may be determined. The new action receives a preliminary Action Priority rating as a prediction of effectiveness. However, the status of the action remains “implementation pending” until the effectiveness has been tested. After the tests are finalized, the preliminary rating is confirmed or adapted as indicated. The status of the action is then changed from “implementation pending” to “completed.” The reassessment should be based on the effectiveness of the Preventive and Detection Actions taken, and the new values are based on the definitions in the Design FMEA Occurrence and Detection rating tables.
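The sketch below illustrates the reassessment mechanics only: Severity stays fixed, while completed actions show up as new Occurrence and Detection ratings. The Action Priority function here is a deliberately simplified stand-in with invented thresholds; the real assignment comes from the published AIAG & VDA Action Priority tables, not from any formula.

```python
# Toy stand-in for an Action Priority lookup (H/M/L). The thresholds are
# invented for illustration; the handbook defines AP via lookup tables.
def action_priority(severity: int, occurrence: int, detection: int) -> str:
    score = severity * occurrence * detection
    if severity >= 9 or score >= 200:
        return "H"
    if score >= 80:
        return "M"
    return "L"

# Severity is unchanged by the action; O and D are re-rated per the
# Design FMEA rating tables once effectiveness has been demonstrated.
before = action_priority(severity=8, occurrence=5, detection=6)  # "H"
after = action_priority(severity=8, occurrence=2, detection=3)   # "L"
print(before, "->", after)
```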

6.5 Continual Improvement
The DFMEA serves as an historical record for the design. Therefore, the original Severity, Occurrence, and Detection (S, O, D) ratings need to be visible, or at a minimum available and accessible as part of the version history. The completed analysis becomes a repository to capture the progression of design decisions and design refinements. However, original S, O, D ratings may be modified for foundation, family, or generic DFMEAs, because the information is used as a starting point for an application-specific analysis.

Example of DFMEA Optimization with new Risk Evaluation Form Sheet

6.6 Collaboration between the FMEA team, Management, Customers and Suppliers regarding Potential Failures

Communication between the FMEA team, management, customers, and suppliers during the development of the technical risk analysis and/or when the DFMEA is initially complete brings people together to improve their understanding of product functions and failures. In this way, there is a transfer of knowledge that promotes risk reduction.

Step 7: Results Documentation

7.1 Purpose
The purpose of the Results Documentation step is to summarize and communicate the results of the FMEA activity. The main objectives of Design Results Documentation are:

  • Communication of results and conclusions of the analysis
  • Establishment of the content of the documentation
  • Documentation of actions taken, including confirmation of the effectiveness of the implemented actions and assessment of risk after actions taken
  • Communication of actions taken to reduce risks, including within the organization, and with customers and/or suppliers as appropriate
  • Record of risk analysis and risk reduction to acceptable levels

7.2 FMEA Report
The report may be used for communication purposes within a company, or between companies. The report is not meant to replace reviews of the DFMEA details when requested by management, customers, or suppliers. It is meant to be a summary for the DFMEA team and others to confirm completion of each of the tasks and review the results of the analysis. It is important that the content of the documentation fulfills the requirements of the organization, the intended reader, and relevant stakeholders. Details may be agreed upon between the parties. In this way, it is also ensured that all details of the analysis and the intellectual property remain at the developing company. The layout of the document may be company specific. However, the report should indicate the technical risk of failure as a part of the development plan and project milestones. The content may include the following (a brief sketch follows the list below):

  1. A statement of final status compared to original goals established in Project Plan
    • FMEA InTent – Purpose of this FMEA?
    • FMEA Timing – FMEA due date?
    • FMEA Team – List of participants?
    • FMEA Task – Scope of this FMEA?
    • FMEA Tool – How was the analysis conducted (method used)?
  2. A summary of the scope of the analysis, identifying what is new.
  3. A summary of how the functions were developed.
  4. A summary of at least the high-risk failures as determined by the team, with a copy of the specific S/O/D rating tables and method of action prioritization (e.g., Action Priority table).
  5. A summary of the actions taken and/or planned to address the high-risk failures including status of those actions.
  6. A plan and commitment of timing for ongoing FMEA improvement actions.
    • Commitment and timing to close open actions.
    • Commitment to review and revise the DFMEA during mass production to ensure the accuracy and completeness of the analysis as compared with the production design (e.g., revisions triggered by design changes, corrective actions, etc., based on company procedures).
    • Commitment to capture “things gone wrong” in foundation DFMEAs for the benefit of future analysis reuse, when applicable.
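As an illustration of how the report content above might be captured in a tool, the sketch below defines a summary record whose fields mirror that list. The field names are assumptions for illustration, not a mandated report format.

```python
# Hypothetical summary record for an FMEA report; field names mirror the
# content list above and are not prescribed by the method.
from dataclasses import dataclass, field

@dataclass
class FMEAReportSummary:
    intent: str                    # purpose of this FMEA
    due_date: str                  # FMEA timing
    team: list[str]                # participants
    scope: str                     # task: scope, what is new, how functions were developed
    method: str                    # tool and method used to conduct the analysis
    high_risk_failures: list[str] = field(default_factory=list)
    actions: list[str] = field(default_factory=list)  # taken/planned, with status
    improvement_commitments: list[str] = field(default_factory=list)  # timing to close open actions
```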
Standard DFMEA Form Sheet
Alternate DFMEA Form Sheet
DFMEA Software View

AIAG & VDA Failure Mode and Effect Analysis

Failure Mode and Effects Analysis (FMEA) is a team-oriented, systematic, qualitative, analytical method intended to:

  • evaluate the potential technical risks of failure of a product or process
  • analyze the causes and effects of those failures
  • document preventive and detection actions
  • recommend actions to reduce risk

Manufacturers consider different types of risk, including technical risks, financial risks, time risks, and strategy risks. The FMEA is used for analyzing the technical risks to reduce failures and improve safety in products and processes.

The objective of FMEA is to identify the functions of a product or steps of a process and the associated potential failure modes, effects, and causes. Furthermore, it is used to evaluate whether prevention and detection controls already planned are enough, and to recommend additional actions. The FMEA documents and tracks actions that are taken to reduce risk. The FMEA methodology helps engineers prioritize and focus on preventing product and/or process problems from occurring. Business objectives exist that are supported by the FMEA and other activities, such as:

  • Increasing the quality, reliability, manufacturability, serviceability, and safety of automotive products
  • Ensuring the hierarchy, linkage, interface, cascading, and alignment of requirements between components, systems, and vehicles are captured
  • Reducing warranty and goodwill costs
  • Increasing customer satisfaction in a highly competitive market
  • Proving product and process risk analysis in the case of product liability
  • Reducing late changes in development
  • Maintaining defect-free product launches
  • Targeting communication in internal and external customer and supplier relationships
  • Building up a knowledge base in the company, i.e., documenting lessons learned
  • Complying with regulations in the registration approval of components, systems, and vehicles

Limitations of the FMEA include the following:

  • It is qualitative (subjective), not quantitative (measurable)
  • It is a single-point failure analysis, not a multi-point failure analysis
  • It relies on the team’s level of knowledge, which may or may not predict future performance
  • It is a summary of the team’s discussions and decisions; therefore, the quality of the FMEA report is subject to the recording skills of the team, which may reflect the discussion points in whole or in part (it is not a transcript of a meeting)

For quantitative analysis and multi-point failure analysis, other methods such as FTA (Fault Tree Analysis) and FMEDA (Failure Modes, Effects, and Diagnostic Analysis) are used. These are methods which can calculate and analyze the relevant metrics (e.g., single-point faults, multi-point faults, latent faults) to reach a quantified analysis result.
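To show why FTA is the quantitative counterpart, the sketch below applies basic gate arithmetic for independent basic events: an AND gate multiplies failure probabilities, while an OR gate combines them as one minus the product of the survival probabilities. This is generic reliability math, not part of the FMEA method itself, and the probabilities are invented.

```python
# Basic fault-tree gate math for independent events (generic reliability
# theory; illustrative numbers only).
def and_gate(probabilities):
    """All inputs must fail: multiply the failure probabilities."""
    p = 1.0
    for x in probabilities:
        p *= x
    return p

def or_gate(probabilities):
    """Any single input failing suffices: 1 minus the product of survivals."""
    q = 1.0
    for x in probabilities:
        q *= 1.0 - x
    return 1.0 - q

# Example: the top event occurs if a sensor fails OR both redundant
# channels fail.
p_top = or_gate([1e-4, and_gate([1e-3, 1e-3])])
print(f"{p_top:.3e}")  # ~1.010e-04
```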

Integration of FMEA in the Company

FMEA is a multi-disciplined activity affecting the entire product realization process. The implementation of FMEA needs to be well planned to be fully effective. The FMEA method is an integral element of Product Development and Process Development activities. The FMEA can reduce product redevelopment timing and cost. It supports the development of comprehensive specifications, test plans, and Control Plans.

1 Potential Considerations of the FMEA
The competent performance of the FMEA and the implementation of its results are among the responsibilities of companies that design, manufacture, and/or assemble products for the automotive industry. It is critical that the analysis take into consideration the product’s operating conditions during its useful life, particularly with regard to safety risks and foreseeable (but unintentional) misuse. When the FMEA is performed, the following norms are observed:

  • Clear: potential failure modes are described in technically precise, specific terms, enabling a specialist to assess failure causes and possible effects. Descriptions are free from possible misunderstanding. Emotion-laden terms (e.g., dangerous, intolerable, irresponsible) are not appropriate.
  • True: the consequences of potential failures are described accurately (e.g., potential for odor, smoke, fire, etc.).
  • Realistic: failure causes are reasonable. Extreme events are not considered (e.g., falling rock on road, no power to manufacturing plant, etc.). Failures resulting from misuse relative to perception, judgement, or action are considered foreseeable when documented by systematic methods (including brainstorming, expert judgement, field reports, use case analysis, etc.). Failures resulting from intentional misuse (e.g., deliberate manipulation and sabotage) are not considered.
  • Complete: foreseeable potential failures are not concealed. Concern about revealing too much know-how by creating a correct and competent FMEA is not a valid reason for an incomplete FMEA. Completeness refers to the entirety of the product/process under analysis (e.g., system elements and functions). However, the depth of detail depends on the risks involved.

Technical risks of failure identified in the FMEA are either assessed as acceptable, or actions are assigned to further reduce risk. The closure status of actions to reduce the risk is documented.

2 Senior Management Commitment

The FMEA process can take considerable time to complete. A commitment of the required resources is vital. Active participation of the product and process owners and commitment from senior management are important to successful FMEA development. Senior management carries the responsibility for the application of FMEA. Ultimately, senior management is responsible for acceptance of the risks and risk minimization actions identified in the FMEA.

3 Know-How Protection of the Design FMEA/ Process FMEA

The sharing of intellectual property found in the Design FMEA and/or Process FMEA between suppliers and customers is governed by contractual agreements between suppliers and customers.

4 Agreements between Customers and Suppliers
The Customer Specific Requirements regarding FMEA should be coordinated with the parties involved and/or the suppliers. An agreement made about the execution of FMEAs may include, but is not limited to, items such as system boundaries, necessary work documents, analysis methods, and evaluation tables.

5 Foundation and Family FMEAs
Foundation and family FMEAs are recommended to be created and used as a basis for new analyses. These optional practices provide the greatest opportunity to leverage past experience and knowledge, ensure that knowledge is accumulated over product lifecycles, and ensure that prior performance issues are not repeated (lessons learned). Furthermore, such reuse also reduces effort and expenditures. Foundation FMEAs (also known as generic, baseline, template, core, master, or best practice FMEAs, etc.) are FMEAs that contain knowledge of the organization from prior developments, which makes them useful as a starting point for new FMEAs. The foundation FMEA is not program specific; therefore, the generalization of requirements, functions, and measures is allowed. Family FMEAs are specialized foundation FMEAs. It is common to develop products that generally contain common or consistent product boundaries and related functions (a Product Family), or processes which contain a series of operations that produce multiple products or part numbers. In these cases, it is appropriate to develop Family FMEAs which cover the commonalities for these Families. When using the family or foundation FMEA approach for the new product or process under development, the team should identify and focus the analysis on the differences between the existing and the new product, process, or application. The information and ratings carried over from the family or foundation FMEA are to be critically examined with regard to the respective use case and experiences from the known application.

FMEA for Products and Processes

There are three basic cases for which the FMEA is to be applied, each with a different scope or focus.
Case 1: New designs, new technology, or new process. The scope of the FMEA is the complete design, technology, or process.

Case 2: New application of existing design or process. The scope of the FMEA is an existing design or process in a new environment, location, application, or usage profile (including duty cycle, regulatory requirements, etc.). The scope of the FMEA should focus on the impact of the new environment, location, or application usage on the existing design or process.
Case 3: Engineering changes to an existing design or process. New technical developments, new requirements, product recalls, and reports of failures in the field may drive the need for design and/or process changes. In these cases, a review or revision of the FMEA may be necessary.
The FMEA contains a collection of knowledge about a design or process and may be revised after start of production if at least one of the following points applies:

  • Changes to designs or processes
  • Changes to the operating conditions
  • Changed requirements (e.g., law, norms, customer, state of the art)
  • Quality issues (e.g., plant experience, zero-mileage or field issues, internal/external complaints)
  • Changes to the Hazard Analysis and Risk Assessment (HARA)
  • Changes to the Threat Analysis and Risk Assessment (TARA)
  • Findings due to product monitoring
  • Lessons learned
There are two main approaches to FMEA: the analysis according to product functions (Design FMEA) or according to process steps (Process FMEA).

1 Design FMEA
A Design FMEA (DFMEA) is an analytical technique utilized primarily by a design responsible engineer/team as a means to assure that, to the extent possible, potential Failure Modes and their associated Causes or mechanisms of failure have been considered and addressed prior to releasing the part to production. The Design FMEA analyzes the functions of a system, subsystem, or component of interest as defined by the boundary shown on the Block/Boundary Diagram, the relationships between its underlying elements, and its relationships to external elements outside the system boundary. This enables the identification of possible design weaknesses to minimize potential risks of failure. A System DFMEA is comprised of various subsystems and components which are represented as system elements (items). System and subsystem analyses are dependent on the viewpoint or responsibility. Systems provide functions at the vehicle level. These functions cascade through subsystems and components. For purposes of analysis, a subsystem is considered the same way as a system. Interfaces and interactions among systems, subsystems, the environment, and the customers (e.g., Tier N, OEM, and end user) may be analyzed in System FMEAs. Within a system there may be software, electronic, and mechanical elements. Examples of systems include: Vehicle, Transmission System, Steering System, Brake System, Electronic Stability Control System, etc. A component DFMEA is a subset of a system or subsystem DFMEA. For example, an Electrical Motor is a component of the Window Lifter, which is a subsystem of the Window Lifter System. A Housing for the Electrical Motor may also be a component or part. For this reason, the terms “system element” or “item” are used regardless of the level of analysis. Design FMEA may also be used to assess the risks of failure of non-automotive products such as machines and tooling. The actions resulting from the analysis may be used to recommend design changes, additional testing, and other actions which reduce the risk of failure or increase the ability of a test to detect failures prior to delivery of the design for production.

2 Process FMEA
In contrast to the Design FMEA (DFMEA), which analyzes the failure possibilities that may be created during the design phase of the product, the Process FMEA (PFMEA) analyzes the potential failures of manufacturing, assembly, and logistical processes to produce products which conform to design intent. Process-related failures are different than the failures analyzed in the Design FMEA. The Process FMEA analyzes processes by considering the potential failure modes which may result from process variation, to establish priority of actions for prevention and, as needed, improve controls. The overall purpose is to analyze processes and take action prior to production start, to avoid unwanted defects related to manufacturing and assembly and the consequences of those defects.

3 Collaboration between FMEAs
There are opportunities for collaboration between both Design and Process FMEAs in the same company and outside of the company. To help communicate effects and severity, a jointly agreed severity evaluation can be reviewed between organizations (different companies in the supply chain, starting with Tier 1, Tier 2, Tier 3, etc.).

A good starting point for a manufacturer is to make sure the severity in the DFMEA and PFMEA are the same when the failure effects are the same. If the “product” failure effects to the end user (vehicle level) are not included in the PFMEA, then the correlation between the DFMEA and PFMEA is not possible. A correlation needs to be made so that a failure of a feature in design that leads to a certain failure effect is also captured in the PFMEA for the same feature (product characteristic).
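A simple consistency check can support this correlation: wherever a DFMEA row and a PFMEA row carry the same failure effect, their severity ratings should agree. The sketch below assumes invented field names and example rows; it is one possible check, not a prescribed procedure.

```python
# Flag rows where DFMEA and PFMEA assign different severities to the same
# failure effect; field names and example data are illustrative.
def severity_mismatches(dfmea_rows, pfmea_rows):
    """Return (effect, S_dfmea, S_pfmea) tuples where the ratings disagree."""
    dfmea_s = {row["effect"]: row["severity"] for row in dfmea_rows}
    return [
        (row["effect"], dfmea_s[row["effect"]], row["severity"])
        for row in pfmea_rows
        if row["effect"] in dfmea_s and row["severity"] != dfmea_s[row["effect"]]
    ]

dfmea = [{"effect": "Loss of braking assist", "severity": 9}]
pfmea = [{"effect": "Loss of braking assist", "severity": 7}]
print(severity_mismatches(dfmea, pfmea))  # flags the 9 vs. 7 disagreement
```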

Project Planning
The Five T’s are five topics that should be discussed at the beginning of a DFMEA or PFMEA in order to achieve the best results on time and avoid FMEA rework. These topics can be used as part of a project kick-off.

  • FMEA InTent – Why are we doing FMEA?
  • FMEA Timing – When is this due?
  • FMEA Team – Who needs to be on the team?
  • FMEA Task – What work needs to be done?
  • FMEA Tools – How do we conduct the analysis?

1 FMEA InTent

It is recommended that members of the FMEA team are competent in the method, based on their role on the team. When team members understand the purpose and intent of FMEA, they will be more prepared to contribute to the goals and objectives of the project.

2 FMEA Timing

FMEA is meant to be a “before-the-event” action, not an “after-the-fact” exercise. To achieve the greatest value, the FMEA is conducted before the implementation of a product or process in which the failure mode potential exists. One of the most important factors for the successful implementation of an FMEA program is timeliness. Up-front time spent properly completing an FMEA, when product/process changes can be most easily and inexpensively implemented, will minimize late change crises. The FMEA, as a method for system analysis and failure prevention, is best initiated at an early stage of the product development process. It is used to evaluate the risks, valid at that time, in order to initiate actions to minimize them. In addition, the FMEA can support the compilation of requirements. The FMEA should be carried out according to the project plan and evaluated at the project milestones according to the state of the analysis. It is recommended that a company define the desired maturity levels for its FMEAs according to overall company-specific development project milestones.

FMEA Timing Aligned with APQP Phases (Plan and Define Program; Product Design and Development Verification; Process Design and Development Verification; Product and Production Validation; Feedback, Assessment and Corrective Action):

  • DFMEA: Start FMEA planning in the concept phase, before product development begins. Information flows between the DFMEA and PFMEA; the DFMEA and PFMEA should be executed during the same time period to allow optimization of both the product and process design. Start the DFMEA when the design concept is well understood. Complete the DFMEA analysis prior to release of the design specification for quotation. Complete DFMEA actions prior to start of production tooling. Start again with DFMEA and PFMEA planning if there are changes to an existing design or process.
  • PFMEA: Start the PFMEA when the production concept is well understood. Complete the PFMEA analysis prior to the final process decision. Complete PFMEA actions prior to PPAP/PPA.

FMEA Timing Aligned to MLA Phases (VDA maturity level assurance for new parts, ML0–ML7: Innovation approval for serial development; Requirement management for procurement; Definition of the supply chain and placing of orders; Approval of technical specification; Production planning done; Serial tools, spare parts, and serial machines available; Product and process approval; Project end, transfer of responsibilities to serial production, start of requalification):

  • The same DFMEA and PFMEA timing guidance listed above applies across the MLA phases.

NOTE: Exceptions to this FMEA timing include non-traditional development flows such as where development of a “standard” process precedes the development of products that will be manufactured using the process.

3 FMEA Team

The FMEA team consists of multi-disciplinary (cross-functional) members who encompass the necessary subject matter knowledge. This should include facilitation expertise and knowledge of the FMEA process. The success of the FMEA depends on active participation of the cross-functional team as necessary to focus on the topics of discussion.

The Design FMEA Team
The Core Team may consist of the following people:

  • facilitator
  • design engineer
  • system engineer
  • component engineers
  • test engineer
  • quality/reliability engineer
  • others responsible for the development of the product

The Core Team members prepare the FMEA System Analysis (Steps 1-3) and participate in the FMEA meetings. The Extended Team may participate on demand (coordinated by the FMEA facilitator or meeting organizer). The Extended Team may consist of the following people:

  • technical experts
  • process/manufacturing engineer
  • service engineer
  • project manager
  • functional safety engineer
  • purchasing
  • supplier
  • customer representative
  • others that may have specialized knowledge which will help the core team analyze specific aspects of the product

The Process FMEA Team
The Core Team may consist of the following people:

  • facilitator
  • process/manufacturing engineer
  • ergonomic engineer
  • process validation engineer
  • quality/reliability engineer
  • others responsible for the development of the process

The Core Team members prepare the FMEA System Analysis (Steps 1 – 3) and participate in the FMEA meetings. The Extended Team may participate on demand (coordinated by the FMEA facilitator or meeting organizer). The Extended Team may consist of the following people:

  • design engineer
  • technical experts
  • service engineer
  • project manager
  • maintenance staff
  • line worker
  • purchasing
  • supplier
  • others (as necessary)

FMEA Team Roles and Responsibilities
Within the organization’s product development process, the following roles and responsibilities for FMEA participation should be assigned. Responsibilities of a given role can be shared amongst different persons, and/or multiple roles may be assigned to the same person.
a) Management (Project Manager)

  • Authority to make decisions about the acceptability of identified risks and the execution of actions
  • Defines the persons responsible for pre-work activities, FMEA facilitation, and the design/process engineer responsible for implementation of actions resulting from the analysis
  • Responsible for selecting and applying resources and ensuring an effective risk management process is implemented within scheduled project timing
  • Responsibility and ownership for development and maintenance of the FMEAs.
  • Management responsibility also includes providing direct support to the team(s) through on-going reviews and eliminating roadblocks.
  • Responsible for budget.

b) Lead Design/Process Engineer (Technical Lead)

  • Technical responsibility for the FMEA contents
  • Preparation of the Business Case for technical and/or financial decisions
  • Definition of elements, functions, requirements, and interfaces
  • Focusing on the topics
  • Procurement of the necessary documents and information
  • Incorporating lessons learned

c) FMEA Facilitator

  • Coordination and organization of the workflows in the FMEA
  • Mitigation of conflicts
  • Participation in the team formation
  • Participation in the Preparation of the rough schedule
  • Participation in the invitation to the 1st team meeting for the analysis phase
  • Participation in the Preparation of the decision guidelines/criteria
  • Development of Corporate or Product Line Examples for Rating Tables (Optional) with support from Design/Process Engineer
  • Method competence (FMEA) and familiarization of participants in the FMEA method
  • FMEA Software documentation competence (as necessary)
  • Social skills; able to work in a team
  • Competent moderator; ability to convince; organization and presentation skills
  • Managing execution of the 7 steps of FMEA method
  • If necessary, preparation or wrap-up of FMEA meetings
  • Moderation of the FMEA workgroup
  • NOTE: Any team member with the relevant competence and training may fulfill the role of facilitator.

d) Core Team Members

  • Contribute knowledge from relevant product and process experience
  • Contribute necessary information about the product or process that is the focus of the FMEA
  • Contribution of existing experiences from previous FMEAs already known
  • Participation in the execution of the 7 steps of FMEA
  • Involvement in the Preparation of the Business Case
  • Incorporating lessons learned

e) Extended Team Members/Experts

  • Contribution of additional information about special topics
  • Contribution of necessary information about the product or process that is the focus of the FMEA
  • Involvement in the Preparation of the Business Case

4 FMEA Task
The 7-Step Overview provides the framework for the tasks and deliverables of the FMEA. In addition, the FMEA team should be prepared to review the results of their analysis with management and the customer upon request. The FMEA may also be audited by an internal auditor, customer auditor, or third-party registrar to ensure each task has been fulfilled.

5 FMEA Tools

There are numerous FMEA tools, i.e., software packages that can be used to develop a DFMEA and PFMEA as well as follow up on actions. This software ranges from dedicated FMEA software to standard spreadsheets customized to develop the FMEA. Companies may develop their own in-house database solution or purchase commercial software. In any case, the FMEA team needs to have knowledge of how to use the FMEA tool selected for their project as required by the company. The Software View depicts what the user sees when developing an FMEA using specialized software that utilizes a system element structure, function net, failure net, etc. The Form View depicts what the user sees when developing an FMEA in a spreadsheet.
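The sketch below suggests, under assumed class and field names, the data model behind such a software view: system elements form a structure tree, each element carries functions, and failures are linked into nets of causes (lower level) and effects (higher level).

```python
# Rough sketch of a structure tree with function and failure nets, as used
# by dedicated FMEA software; all names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Failure:
    description: str
    causes: list["Failure"] = field(default_factory=list)   # failure net, lower level
    effects: list["Failure"] = field(default_factory=list)  # failure net, higher level

@dataclass
class Function:
    description: str
    failures: list[Failure] = field(default_factory=list)

@dataclass
class SystemElement:
    name: str
    functions: list[Function] = field(default_factory=list)
    children: list["SystemElement"] = field(default_factory=list)  # structure tree

# Example: a component nested in a subsystem, mirroring the DFMEA example
# of an Electrical Motor inside a Window Lifter.
motor = SystemElement("Electrical Motor")
lifter = SystemElement("Window Lifter", children=[motor])
```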

FMEA METHODOLOGY

The FMEA process is carried out in seven steps. These seven steps provide a systematic approach to perform a Failure Mode and Effects Analysis and serve as a record of the technical risk analysis.

AIAG- Advance Product Quality Planning and Control Plan (APQP)

Product Quality Planning is a structured method of defining and establishing the steps necessary to assure that a product satisfies the customer. The goal of product quality planning is to facilitate communication with everyone involved to assure that all required steps are completed on time. Effective product quality planning depends on a company’s top management commitment to the effort required in achieving customer satisfaction. Some of the benefits of product quality planning are:

  • To direct resources to satisfy the customer.
  • To promote early identification of required changes.
  • To avoid late changes.
  • To provide a quality product on time at the lowest cost.

The work practices, tools, and analytical techniques described in this manual are listed in a logical sequence to make it easy to follow. Each Product Quality Plan is unique. The actual timing and sequence of execution is dependent on customer needs and expectations and/or other practical matters. The earlier a work practice, tool, and/or analytical technique can be implemented in the Product Quality Planning Cycle, the better.

Organize the Team: The organization’s first step in product quality planning is to assign a process owner for the APQP project. In addition, a cross functional team should be established to assure effective product quality planning. The team should include representatives from multiple functions such as engineering, manufacturing, material control, purchasing, quality, human resources, sales, field service, suppliers, and customers, as appropriate.

Define the Scope: It is important for the organization’s product quality planning team in the earliest stage of the product program to identify customer needs, expectations, and requirements. At a minimum, the team must meet to:

  • Select a project team leader responsible for overseeing the planning process. (In some cases it may be advantageous to rotate the team leader during the planning cycle.)
  • Define the roles and responsibilities of each area represented.
  • Identify the customers – internal and external.
  • Define customer requirements. (Use QFD if applicable.)
  • Select the disciplines, individuals, and/or suppliers that must be added to the team, and those not required.
  • Understand customer expectations, i.e., design, number of tests.
  • Assess the feasibility of the proposed design, performance requirements, and manufacturing process.
  • Identify costs, timing, and constraints that must be considered.
  • Determine assistance required from the customer.
  • Identify documentation process or method.

Team-to-Team: The organization’s product quality planning team must establish lines of communication with other customer and organization teams. This may include regular meetings with other teams. The extent of team-to-team contact is dependent upon the number of issues requiring resolution.

Training: The success of a Product Quality Plan is dependent upon an effective training program that communicates all the requirements and development skills to fulfill customer needs and expectations.

Customer and Organization Involvement: The primary customer may initiate the quality planning process with an organization. However, the organization has an obligation to establish a cross functional team to manage the product quality planning process. Organizations must expect the same performance from their suppliers.

Simultaneous Engineering: Simultaneous Engineering is a process where cross functional teams strive for a common goal. It replaces the sequential series of phases where results are transmitted to the next area for execution. The purpose is to expedite the introduction of quality products. The organization’s product quality planning team assures that other areas/teams plan and execute activities that support the common goal or goals.

Control Plans: Control plans are written descriptions of the systems for controlling parts and processes. Separate control plans cover three distinct phases:

  • Prototype – A description of the dimensional measurements and material and performance tests that will occur during Prototype build.
  • Pre-launch – A description of the dimensional measurements and material and performance tests that will occur after Prototype and before full Production.
  • Production – A comprehensive documentation of product/process characteristics, process controls, tests, and measurement systems that will occur during mass production.

Concern Resolution:  During the planning process, the team will encounter product design and/or processing concerns. These concerns should be documented on a matrix with assigned responsibility and timing. Disciplined problem- solving methods are recommended in difficult situations. Analytical techniques described in Appendix B should be used as appropriate.

Product Quality Timing Plan: The organization’s product quality planning team’s first order of business following organizational activities should be the development of a Timing Plan. The type of product, complexity, and customer expectations should be considered in selecting the timing elements that must be planned and charted. All team members should agree with each event, action, and timing. A well-organized timing chart should list tasks, assignments, and/or other events. (The Critical Path Method may be appropriate.) The chart also provides the planning team with a consistent format for tracking progress and setting meeting agendas. To facilitate status reporting, each event must have a “start” and a “completion” date with the actual point of progress recorded. Effective status reporting supports program monitoring with a focus on identifying items that require special attention.
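One way to hold the “start”, “completion”, and actual-progress dates the text calls for is sketched below; the record layout and the attention rule are assumptions for illustration, not a required chart format.

```python
# Hypothetical timing-chart entry with planned dates and recorded progress.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TimingEvent:
    task: str
    owner: str
    start: date
    completion: date                         # planned completion date
    actual_progress: Optional[date] = None   # actual point of progress recorded

    def needs_attention(self, today: date) -> bool:
        """Flag events past their planned completion with no timely progress recorded."""
        late = today > self.completion
        behind = self.actual_progress is None or self.actual_progress < self.completion
        return late and behind

event = TimingEvent("Prototype control plan", "Quality", date(2024, 3, 1), date(2024, 5, 1))
print(event.needs_attention(date(2024, 6, 1)))  # True: past due, no progress recorded
```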

Plans Relative to the Timing Chart: The success of any program depends on meeting customer needs and expectations in a timely manner at a cost that represents value. The Product Quality Planning Timing Chart below and the Product Quality Planning Cycle described previously require a planning team to concentrate its efforts on problem prevention. Problem prevention is driven by Simultaneous Engineering performed by product and manufacturing engineering activities working concurrently. Planning teams must be prepared to modify product quality plans to meet customer expectations. The organization’s product quality planning team is responsible for assuring that timing meets or exceeds the customer timing plan.

1.0 Plan and Define Program

This chapter describes how customer needs and expectations are linked to planning and defining a quality program. The goal of any product program is meeting customer needs while providing competitive value. The initial step of the product quality planning process is to ensure that customer needs and expectations are clearly understood. The inputs and outputs applicable to the planning process may vary according to the product development process and customer needs and expectations.

INPUTS

  • Voice of the Customer
    • Market Research (including OEM Vehicle Build Timing and OEM Volume Expectations)
    • Historical Warranty and Quality Information
    • Team Experience
  • Business Plan/Marketing Strategy
  • Product/Process Benchmark Data
  • Product/Process Assumptions
  • Product Reliability Studies
  • Customer Inputs

OUTPUTS

  • Design Goals
  • Reliability and Quality Goals
  • Preliminary Bill of Material
  • Preliminary Process Flow Chart
  • Preliminary Listing of Special Product and Process Characteristics
  • Product Assurance Plan
  • Management Support (including program timing and planning for resources and staffing to support required capacity)

1.1 Voice of the Customer
The “Voice of the Customer” encompasses complaints, recommendations, data and information obtained from internal and/or external customers. Some methods for gathering this information are as follows.

1.1.1 Market Research

The organization’s product quality planning team may need to obtain market research data and information reflecting the Voice of the Customer. The following sources can assist in identifying customer concerns and wants and translating those concerns into product and process characteristics:

  • Customer interviews
  • Customer questionnaires and surveys
  • Market test and positioning reports
  • New product quality and reliability studies
  • Competitive product quality studies
  • Best Practices
  • Lessons Learned

1.1.2 Historical Warranty and Quality Information

A list of historical customer concerns and wants should be prepared to assess the potential for recurrence during the design, manufacture, installation and use of the product. These should be considered as an extension of the other design requirements and included in the analysis of customer needs. Many of the following items can assist the team in identifying customer concerns and wants and prioritizing appropriate resolutions.

  • Best Practices
  • Lessons Learned
  • Warranty reports
  • Capability indicators
  • Supplier plant internal quality reports
  • Problem resolution reports
  • Customer plant returns and rejections
  • Field return product analysis

1.1.3 Team Experience

The team may use any source of information, as appropriate, including the following:

  • Input from higher system level or past Quality Function Deployment (QFD) projects
  • Media commentary and analysis: magazine and newspaper reports, etc.
  • Customer letters and suggestions
  • Best Practices
  • Lessons Learned
  • Dealer comments
  • Fleet Operator’s comments
  • Field service reports
  • Internal evaluations using surrogate customers
  • Road trips
  • Management comments or direction
  • Problems and issues reported from internal customers
  • Government requirements and regulations
  • Contract review

1.2 Business Plan and Marketing Strategy

The customer business plan and marketing strategy will set the framework for the product quality plan. The business plan may place constraints (e.g., timing, cost, investment, product positioning, research and development (R&D) resources) on the team that affect the direction taken. The marketing strategy will define the target customer, the key sales points, and key competitors.

1.3 Product/Process Benchmark Data

The use of benchmarking will provide input to establishing product/process performance targets. Research and development may also provide benchmarks and concept ideas. One method of successful benchmarking is:

  • Identify the appropriate benchmarks.
  • Understand the reason for the gap between your current status and the benchmark.
  • Develop a plan to close the gap, match the benchmark, or exceed the benchmark.

1.4 Product/Process Assumptions

There will be assumptions that the product has certain features, design, or process concepts. These include technical innovations, advanced materials, reliability assessments, and new technology. All should be utilized as inputs.

1.5 Product Reliability Studies

This type of data considers frequency of repair or replacement of components within designated periods of time and the results of long-term reliability/durability tests. 

1.6 Customer Inputs

 The next users of the product can provide valuable information relating to their needs and expectations. In addition, the next product users may have already conducted some or all of the aforementioned reviews and studies. These inputs should be used by the customer and/or organization to develop agreed upon measures of customer satisfaction.

1.7 Design Goals

 Design goals are a translation of the Voice of the Customer into measurable design objectives. The proper selection of design goals assures that the Voice of the Customer is not lost in subsequent design activity. The Voice of the Customer also includes regulatory requirements such as materials composition reporting and polymeric part marking.

1.8 Reliability and Quality Goals

Reliability goals are established based on customer wants and expectations, program objectives, and reliability benchmarks. An example of customer wants and expectations could include no safety failures. Some reliability benchmarks could be competitor product reliability, warranty data, or frequency of repair over a set time period. Quality goals should be based on metrics such as parts per million, problem levels, or scrap reduction.  
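As a quick illustration of the parts-per-million metric mentioned above, PPM is simply the defect fraction scaled by one million; the numbers below are invented.

```python
# Parts-per-million arithmetic with illustrative numbers.
def ppm(defective_parts: int, total_parts: int) -> float:
    return defective_parts / total_parts * 1_000_000

print(ppm(12, 250_000))  # 48.0 PPM, e.g., against a hypothetical 50 PPM goal
```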

1.9 Preliminary Bill of Material

The team should establish a preliminary bill of material based on product/process assumptions and include a potential supplier list.   In order to identify the preliminary special product/process characteristics it is necessary to have selected the appropriate design and manufacturing process.

1.10 Preliminary Process Flow Chart
The anticipated manufacturing process should be described using a process flow chart developed from the preliminary bill of material and product/process assumptions.

1.11 Preliminary Identification of Special Product and Process Characteristics
Special product and process characteristics are identified by the customer in addition to those selected by the organization through knowledge of the product and process. Examples of input to identification of special characteristics include:

  • Product assumptions based on the analysis of customer needs and expectations.
  • Identification of reliability goals and requirements.
  • Identification of special process characteristics from the anticipated manufacturing process.
  • Similar part FMEAs.

1.12 Product Assurance Plan
The Product Assurance Plan translates design goals into design requirements and is based on customer needs and expectations. This manual does not require a specific method for preparing a Product Assurance Plan. The Product Assurance Plan can be developed in any format understood by the organization and should include:

  • Outlining of program requirements.
  • Identification of reliability, durability, and apportionment/allocation goals and/or requirements.
  • Assessment of new technology, complexity, materials, application, environment, packaging, service, and manufacturing requirements, or any other factor that may place the program at risk.
  • Use of Failure Mode and Effects Analysis (FMEA).
  • Development of preliminary engineering requirements.

1.13 Management Support

One of the keys to the success of Advanced Product Quality Planning is the interest, commitment, and support of upper management. Participation by management in product quality planning meetings is vital to ensuring the success of the program. Management should be updated at the conclusion of every product quality planning phase to reinforce their commitment and support. Updates and/or requests for assistance can occur more frequently as required. A primary goal of Advanced Product Quality Planning is to maintain management support by demonstrating that all planning requirements have been met and/or concerns documented and scheduled for resolution, including program timing and planning for resources and staffing to support required capacity.

2.0 Product Design and Development

This chapter discusses the elements of the planning process during which design features and characteristics are developed into a near-final form. All design factors should be considered by the organization in the Advanced Product Quality Planning process, even if the design is owned by the customer or shared. The steps include prototype build to verify that the product or service meets the objectives of the Voice of the Customer. A feasible design must permit meeting production volumes and schedules, and be consistent with the ability to meet engineering requirements, along with quality, reliability, investment cost, weight, unit cost, and timing objectives. Although feasibility studies and control plans are primarily based on engineering drawings and specification requirements, valuable information can be derived from the analytical tools described in this chapter to further define and prioritize the characteristics that may need special product and process controls. In this chapter, the Product Quality Planning Process is designed to assure a comprehensive and critical review of engineering requirements and other related technical information. At this stage of the process, a preliminary feasibility analysis will be made to assess the potential problems that could occur during manufacturing.

DESIGN INPUTS

  • Design Goals
  • Reliability and Quality Goals
  • Preliminary Bill of Material
  • Preliminary Process Flow Chart
  • Preliminary Listing of Special Product and Process Characteristics
  • Product Assurance Plan
  • Management Support

DESIGN OUTPUTS

  • Design Failure Mode and Effects Analysis (DFMEA)
  • Design for Manufacturability and Assembly
  • Design Verification
  • Design Reviews
  • Prototype Build – Control Plan
  • Engineering Drawings (Including Math Data)
  • Engineering Specifications
  • Material Specifications
  • Drawing and Specification Changes

APQP OUTPUTS

  • New Equipment, Tooling and Facilities Requirements
  • Special Product and Process Characteristics
  • Gages/Testing Equipment Requirements
  • Team Feasibility Commitment and Management Support

2.1 Design Failure Mode and Effects Analysis (DFMEA)

The DFMEA is a disciplined analytical technique that assesses the probability of failure as well as the effect of such failure. A DFMEA is a living document continually updated as customer needs and expectations require. The DFMEA is an important input to the APQP process that may include previously selected product and process characteristics. The Design FMEA Checklist should also be reviewed to assure that the appropriate design characteristics have been considered.

2.2   Design for Manufacturability and Assembly

   Design for Manufacturability and Assembly is a Simultaneous Engineering process designed to optimize the relationship between design function, manufacturability, and ease of assembly. The scope of customer needs and expectations defined will determine the extent of the organization’s product quality planning team involvement in this activity. This manual does not include or refer to a formal method of preparing a Design for Manufacturability and Assembly Plan. At a minimum, the items listed here should be considered by the organization’s product quality planning team:

  • Design, concept, function, and sensitivity to manufacturing variation
  • Manufacturing and/or assembly process
  • Dimensional tolerances
  • Performance requirements
  • Number of components
  • Process adjustments
  • Material handling

The above list may be augmented based on the organization’s product quality planning team’s knowledge, experience, the product/process, government regulations, and service requirements. 

2.3 Design Verification

Design verification verifies that the product design meets the customer requirements derived from the activities described previously.

2.4 Design Reviews

Design reviews are regularly scheduled meetings led by the organization’s design engineering activity and must include other affected areas. The design review is an effective method to prevent problems and misunderstandings; it also provides a mechanism to monitor progress, report to management, and obtain customer approval as required. Design reviews are a series of verification activities that are more than an engineering inspection. At a minimum, design reviews should include evaluation of:

  • Design/Functional requirement(s) considerations
  • Formal reliability and confidence goals
  • Component/subsystem/system duty cycles
  • Computer simulation and bench test results
  • DFMEA(s)
  • Review of the Design for Manufacturability and Assembly effort
  • Design of Experiments (DOE) and assembly build variation results
  • Test failures
  • Design verification progress

A major function of design reviews is the tracking of design verification progress. The organization should track design verification progress through the use of a plan and report format, referred to as Design Verification Plan and Report (DVP&R) by some customers. The plan and report is a formal method to assure:

  • Design verification
  • Product and process validation of components and assemblies through the application of a comprehensive test plan and report.

The organization’s product quality planning team is not limited to the items listed. The team should consider and use, as appropriate, the analytical techniques.

2.5 Prototype Build – Control Plan

Prototype control plans are a description of the dimensional measurements and material and functional tests that will occur during prototype build. The organization’s product quality planning team should ensure that a prototype control plan is prepared. A Control Plan Checklist is provided to assist in the preparation of the prototype control plan. The manufacture of prototype parts provides an excellent opportunity for the team and the customer to evaluate how well the product or service meets the Voice of the Customer objectives. It is the organization’s product quality planning team’s responsibility to review prototypes for the following:

  • Assure that the product or service meets specification and report data as required.
  • Ensure that particular attention has been given to special product and process characteristics.
  • Use data and experience to establish preliminary process parameters and packaging requirements.
  • Communicate any concerns, deviations, and/or cost impact to the customer.

2.6 Engineering Drawings (Including Math Data)

Customer designs do not preclude the organization’s product quality planning team’s responsibility to review engineering drawings in the following manner. Engineering drawings may include special (governmental regulatory and safety) characteristics that must be shown on the control plan. When customer engineering drawings are nonexistent, the controlling drawings should be reviewed by the team to determine which characteristics affect fit, function, durability, and/or governmental regulatory safety requirements. Drawings should be reviewed to determine if there is sufficient information for a dimensional layout of the individual parts. Control or datum surfaces/locators should be clearly identified so that appropriate functional gages and equipment can be designed for ongoing controls. Dimensions should be evaluated to assure feasibility and compatibility with industry manufacturing and measuring standards. If appropriate, the team should assure that math data is compatible with the customer’s system for effective two-way communication.

2.7 Engineering Specifications

A detailed review and understanding of the controlling specifications will help the organization’s product quality planning team to identify the functional, durability and appearance requirements of the subject component or assembly. Sample size, frequency, and acceptance criteria of these parameters are generally defined in the in-process test section of the Engineering Specification. Otherwise, the sample size and frequency are to be determined by the organization and listed in the control plan. In either case, the organization should determine which characteristics affect meeting functional, durability, and appearance requirements.  

2.8 Material Specifications

In addition to drawings and performance specifications, material specifications should be reviewed for special characteristics relating to physical properties, performance, environmental, handling, and storage requirements. These characteristics should also be included in the control plan.  

2.9 Drawing and Specification Changes

Where drawing and specification changes are required, the team must ensure that the changes are promptly communicated and properly documented to all affected areas.  

2.10  New Equipment, Tooling and Facilities Requirements

The DFMEA, Product Assurance Plan and/or design reviews may identify new equipment and facilities including meeting capacity requirements. The organization’s product quality planning team should address these requirements by adding the items to the Timing Chart. The team should assure that there is a process to determine that new equipment and tooling is capable and delivered on time. Facilities progress should be monitored to assure completion prior to planned production tryout.

2.11 Special Product and Process Characteristics

In the Plan and Define Program stage, the team identified preliminary special product and process characteristics. The organization’s product quality planning team should build on this listing and reach consensus through the evaluation of the technical information. The organization should refer to the appropriate customer-specific requirements.

2.12  Gages/Testing Equipment Requirements

Gages/testing equipment requirements may also be identified at this time. The organization’s product quality planning team should add these requirements to the Timing Chart. Progress should be monitored to assure that required timing is met. 

2.13 Team Feasibility Commitment and Management Support

The organization’s product quality planning team must assess the feasibility of the proposed design at this time. Customer design ownership does not preclude the organization’s obligation to assess design feasibility. The team must be satisfied that the proposed design can be manufactured, assembled, tested, packaged, and delivered in sufficient quantity on schedule at an acceptable cost to the customer. The Design Information Checklist allows the team to review its efforts in this section and make an evaluation of effectiveness. This checklist will also serve as a basis for the open issues discussed in the Team Feasibility Commitment. The team consensus that the proposed design is feasible should be documented along with all open issues that require resolution and presented to management for their support.

3.0 Process Design and Development

This chapter discusses the major features of developing a manufacturing system and its related control plans to achieve quality products. The tasks to be accomplished at this step of the product quality planning process depend upon the successful completion of the prior stages contained in the first two sections. This next step is designed to ensure the comprehensive development of an effective manufacturing system. The manufacturing system must assure that customer requirements, needs, and expectations are met. The inputs and outputs applicable to the process step in this chapter are as follows:

INPUTS

  • Design Failure Mode and Effects Analysis (DFMEA)
  • Design for Manufacturability and Assembly
  • Design Verification
  • Design Reviews
  • Prototype Build – Control Plan
  • Engineering Drawings (Including Math Data)
  • Engineering Specifications
  • Material Specifications
  • Drawing and Specification Changes
  • New Equipment, Tooling and Facilities Requirements
  • Special Product and Process Characteristics
  • Gages/Testing Equipment Requirements
  • Team Feasibility Commitment and Management Support

OUTPUTS

  • Packaging Standards & Specifications
  • Product/Process Quality System Review
  • Process Flow Chart
  • Floor Plan Layout
  • Characteristics Matrix
  • Process Failure Mode and Effects Analysis (PFMEA)
  • Pre-Launch Control Plan (including Error-Proofing Devices)
  • Process Instructions
  • Measurement Systems Analysis Plan
  • Preliminary Process Capability Study Plan
  • Management Support (including operator staffing and training plan)

3.1  Packaging Standards and Specifications

The customer will usually have packaging requirements that should be incorporated into any packaging specifications for the product. If none are provided, the packaging design should ensure product integrity at point of use. The organization’s product quality planning team should ensure that individual product packaging (including interior partitions) is designed and developed. Customer packaging standards or generic packaging requirements should be used when appropriate. In all cases the packaging design should assure that the product performance and characteristics will remain unchanged during packing, transit, and unpacking. The packaging should be compatible with all identified material handling equipment, including robots.

3.2 Product/Process Quality System Review

The organization’s product quality planning team should review the manufacturing site(s) Quality Management System. Any additional controls and/or procedural changes required to produce the product should be updated, documented and included in the manufacturing control plan. This is an opportunity for the organization’s product quality planning team to improve the existing quality system based on customer input, team expertise, and previous experience. The Product/Process Quality Checklist can be used by the organization’s product quality planning team to verify completeness. 

3.3  Process Flow Chart

The process flow chart is a schematic representation of the current or proposed process flow. It can be used to analyze sources of variations of machines, materials, methods, and manpower from the beginning to end of a manufacturing or assembly process. It is used to emphasize the impact of sources of variation on the process. The flow chart helps to analyze the total process rather than individual steps in the process.   The flow chart assists the organization’s product quality planning team to focus on the process when conducting the PFMEA and designing the Control Plan. The Process Flow Chart Checklist can be used by the organization’s product quality planning team to verify completeness. 

3.4   Floor Plan Layout

The floor plan should be developed and reviewed to determine the acceptability of important control items, such as inspection points, control chart location, applicability of visual aids, interim repair stations, and storage areas to contain non-conforming material. All material flow should be keyed to the process flow chart and control plan. The floor plan layout should be developed in a manner that optimizes material travel, handling, and value-added use of floor space, and should facilitate the synchronous flow of materials through the process. The Floor Plan Checklist can be used by the organization’s product quality planning team to verify completeness.

3.5 Characteristics Matrix

A characteristics matrix is a recommended analytical technique for displaying the relationship between process parameters and manufacturing stations.

3.6 Process Failure Mode and Effects Analysis (PFMEA)

A PFMEA should be conducted during product quality planning and before beginning production. It is a disciplined review and analysis of a new or revised process and is conducted to anticipate, resolve, or monitor potential process problems for a new or revised product program. The Process FMEA Checklist can be used by the organization’s product quality planning team to verify completeness.

3.7  Pre-Launch Control Plan

Pre-launch control plans are a description of the dimensional measurements and material and functional tests that will occur after prototype and before full production. The pre-launch control plan should include additional product/process controls to be implemented until the production process is validated. The purpose of the pre-launch control plan is to contain potential non-conformities during or prior to initial production runs. Examples of enhancements in the pre-launch control plan are:

  • More frequent inspection
  • More in-process and final check points
  • Robust statistical evaluations
  • Enhanced audits
  • Identification of error-proofing devices

The Control Plan Checklist can be used by the organization’s product quality planning team to verify completeness.

3.8 Process Instructions

The organization’s product quality planning team should ensure that process instructions provide sufficient understanding and detail for all personnel who have direct responsibility for the operation of the processes. These instructions should be developed from the following sources:

  • FMEAs
  • Control plan(s)
  • Engineering drawings, performance specifications, material specifications, visual standards, and industry standards
  • Process flow chart
  • Floor plan layout
  • Characteristics matrix
  • Packaging Standards and Specifications
  • Process parameters
  • Organization expertise and knowledge of the processes and products
  • Handling requirements
  • Operators of the process

The process instructions for standard operating procedures should be posted and should include set-up parameters such as machine speeds, feeds, cycle times, and tooling, and should be accessible to the operators and supervisors. Additional information for process instruction preparation may be found in appropriate customer-specific requirements.

3.9 Measurement Systems Analysis Plan

The organization’s product quality planning team should ensure that a plan to accomplish the required measurement systems analysis is developed, including checking aids. This plan should include, at a minimum, a laboratory scope appropriate for the required measurements and tests, and the responsibility to ensure gage linearity, accuracy, repeatability, reproducibility, and correlation for duplicate gages.

3.10 Preliminary Process Capability Study Plan

The organization’s product quality planning team should ensure the development of a preliminary process capability plan. The characteristics identified in the control plan will serve as the basis for the preliminary process capability study plan.

3.11 Management Support

The organization’s product quality planning team should schedule a formal review designed to reinforce management commitment at the conclusion of the process design and development phase. This review is critical to keeping upper management informed and to gaining their assistance in resolving any open issues. Management support includes confirming the planning and providing the resources and staffing to meet the required capacity.

4.0 Product and Process Validation

This chapter discusses the major features of validating the manufacturing process through an evaluation of a significant production run. During a significant production run, the organization’s product quality planning team should validate that the control plan and process flow chart are being followed and that the products meet customer requirements. Additional concerns should be identified for investigation and resolution prior to regular production runs. The inputs and outputs applicable to the process steps in this chapter are as follows:

INPUTS

  • Packaging Standards & Specifications
  • Product/Process Quality System Review
  • Process Flow Chart
  • Floor Plan Layout
  • Characteristics Matrix
  • Process Failure Mode and Effects Analysis (PFMEA)
  • Pre-Launch Control Plan
  • Process Instructions
  • Measurement Systems Analysis Plan
  • Preliminary Process Capability Study Plan
  • Management Support

OUTPUTS

  • Significant Production Run
  • Measurement Systems Evaluation
  • Preliminary Process Capability Study
  • Production Part Approval
  • Production Validation Testing
  • Packaging Evaluation
  • Production Control Plan
  • Quality Planning Sign-Off and Management Support

4.1 Significant Production Run

The significant production run must be conducted using production tooling, production equipment, production environment (including production operators), facility, production gages, and production rate. The validation of the effectiveness of the manufacturing process begins with the significant production run. The minimum quantity for a significant production run is usually set by the customer, but can be exceeded by the organization’s product quality planning team. Output of the significant production run (product) is used for:

  • Preliminary process capability study
  • Measurement systems analysis
  • Production rate demonstration
  • Process review
  • Production validation testing
  • Production part approval
  • Packaging evaluation
  • First time capability (FTC)
  • Quality planning sign-off
  • Sample production parts
  • Master sample (as required)

4.2 Measurement Systems Analysis

The specified monitoring and measuring devices and methods should be used to check the control plan identified characteristics to engineering specification and be subjected to measurement system evaluation during or prior to the significant production run. Refer to the Chrysler, Ford, and General Motors Measurement Systems Analysis (MSA) reference manual. 

4.3 Preliminary Process Capability Study

The preliminary process capability study should be performed on characteristics identified in the control plan. The study provides an assessment of the readiness of the process for production. Refer to customer-specific requirements for unique requirements.

4.4  Production Part Approval

PPAP’s purpose is to provide the evidence that all customer engineering design record and specification requirements are properly understood by the organization and that the manufacturing process has the potential to produce product consistently meeting these requirements during an actual production run at the quoted production rate.

4.5 Production Validation Testing

Production validation testing refers to engineering tests that validate that products made from production tools and processes meet customer engineering standards including appearance requirements.  

4.6 Packaging Evaluation

All test shipments (when required) and test methods must assess the protection of the product from normal transportation damage and adverse environmental factors. Customer-specified packaging does not preclude the organization’s product quality planning team involvement in evaluating the effectiveness of the packaging.  

4.7 Production Control Plan

The production control plan is a written description of the systems for controlling production parts and processes. The production control plan is a living document and should be updated to reflect the addition or deletion of controls based on experience gained by producing parts. (Approval of the authorized customer representative may be required.) The production control plan is a logical extension of the pre-launch control plan. Mass production provides the organization the opportunity to evaluate output, review the control plan and make appropriate changes.

4.8 Quality Planning Sign-Off and Management Support

The organization’s product quality planning team should perform a review at the manufacturing location(s) and coordinate a formal sign-off. The product quality sign-off indicates to management that the appropriate APQP activities have been completed. The sign-off occurs prior to first product shipment and includes a review of the following:

  • Process Flow Charts. Verify that process flow charts exist and are being followed.
  • Control Plans. Verify that control plans exist, are available and are followed at all times for all affected operations.
  • Process Instructions. Verify that these documents contain all the special characteristics specified in the control plan and that all PFMEA recommendations have been addressed. Compare the process instructions, PFMEA and process flow chart to the control plan.
  • Monitoring and Measuring Devices. Where special gages, fixtures, test equipment or devices are required per the control plan, verify gage repeatability and reproducibility (GR&R) and proper usage.
  • Demonstration of Required Capacity. Using production processes, equipment, and personnel.

Upon completion of the sign-off, a review with management should be scheduled to inform management of the program status and gain their support with any open issues.

5.0 Feedback, Assessment and Corrective Action

Quality planning does not end with process validation and installation. It is the component manufacturing stage where output can be evaluated when all special and common causes of variation are present. This is also the time to evaluate the effectiveness of the product quality planning effort. The production control plan is the basis for evaluating product or service at this stage. Variable and attribute data must be evaluated. Organizations that fully implement an effective APQP process will be in a better position to meet customer requirements including any special characteristics specified by the customer.

INPUTS

  • Significant Production Run
  • Measurement Systems Evaluation
  • Preliminary Process Capability Study
  • Production Part Approval
  • Production Validation Testing
  • Packaging Evaluation
  • Production Control Plan
  • Quality Planning Sign-Off and Management Support

OUTPUTS

  • Reduced Variation
  • Improved Customer Satisfaction
  • Improved Delivery and Service
  • Effective use of Lessons Learned/Best Practices

5.1 Reduced Variation

Control charts and other statistical techniques should be used as tools to identify process variation. Analysis and corrective actions should be used to reduce variation. Continual improvement requires attention not only to the special causes of variation but also to understanding common causes and seeking ways to reduce these sources of variation. Proposals should be developed including costs, timing, and anticipated improvement for customer review. The reduction or elimination of a common cause may provide the additional benefit of lower costs. Organizations should be using tools such as value analysis and reduction of variation to improve quality and reduce cost.

5.2  Improved Customer Satisfaction

Detailed planning activities and demonstrated process capability of a product or service are important components to customer satisfaction. However, the product or service still has to perform in the customer environment. This product usage stage requires organization participation. In this stage much can be learned by the organization and customer. The effectiveness of the product quality planning efforts can also be evaluated at this stage. The organization and customer become partners in making the changes necessary to correct any deficiencies and to improve customer satisfaction.

5.3  Improved Delivery and Service

The delivery and service stage of quality planning continues the organization and customer partnership in solving problems and continual improvement. The customer’s replacement parts and service operations must also meet requirements for quality, cost, and delivery. The goal is first time quality. However, where problems or deficiencies occur in the field it is essential that the organization and customer form an effective partnership to correct the problem and satisfy the end-user customer. The experience gained in this stage provides the customer and organization with the necessary knowledge to reduce process, inventory, and quality costs and to provide the right component or system for the next product.

5.4 Effective Use of Lessons Learned/Best Practices

  A Lessons Learned or Best Practices portfolio is beneficial for capturing, retaining and applying knowledge. Input to Lessons Learned and Best Practices can be obtained through a variety of methods including:

  • Review of Things Gone Right/Things Gone Wrong (TGR/TGW)
  • Data from warranty and other performance metrics
  • Corrective action plans
  • “Read-across” with similar products and processes
  • DFMEA and PFMEA studies

PRODUCT QUALITY PLANNING CHECKLISTS

The following checklists are provided to assist the organization’s product quality planning team in order to verify that the APQP process is both complete and accurate. These checklists are not intended to fully define or represent all elements of the APQP process. The use of the checklists is one of the last steps of the process and not intended as a “check the box” activity or exercise to circumvent full application of the APQP process. In reviewing the questions in the checklist, where “No” is identified as the appropriate response, the column “Comment/Action Required” is used to identify the action required to close the gap, including the impact on the APQP process. The follow up action should include identification of an individual responsible and schedule. Use the “Person Responsible” and “Due Date” columns.

ANALYTICAL TECHNIQUES

Assembly Build Variation Analysis: An assembly build variation analysis simulates the buildup of an assembly and examines tolerance accumulation, statistical parameters, sensitivity, and “what if” investigations.
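
As a rough illustration of the simulation idea (the component names, nominals, and tolerances below are hypothetical, and independent normal variation with each tolerance treated as ±3 sigma is assumed), a minimal Monte Carlo stack-up sketch in Python might look like this:

    import random
    import statistics

    # Hypothetical one-dimensional stack-up: gap = housing - (part_a + part_b).
    # Each entry is (nominal, +/- tolerance); each tolerance is treated as 3 sigma.
    components = {
        "housing": (50.00, 0.10),
        "part_a": (24.95, 0.05),
        "part_b": (24.95, 0.05),
    }

    def build_once():
        # Draw each dimension from a normal distribution (sigma = tolerance / 3).
        d = {name: random.gauss(nom, tol / 3.0) for name, (nom, tol) in components.items()}
        return d["housing"] - (d["part_a"] + d["part_b"])

    gaps = [build_once() for _ in range(100_000)]
    print(f"gap mean  = {statistics.mean(gaps):.4f}")
    print(f"gap sigma = {statistics.stdev(gaps):.4f}")

Sensitivity and “what if” questions are then explored by perturbing one tolerance or nominal at a time and re-running the simulation.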

Benchmarking: Benchmarking is a systematic approach to identifying standards for comparison. It provides input to the establishment of measurable performance targets, as well as ideas for product design and process design. It can also provide ideas for improving business processes and work procedures. Product and process benchmarking should include the identification of world class or best-in-class based on customer and internal objective performance measures and research into how this performance was achieved. Benchmarking should provide a stepping stone for developing new designs and processes that exceed the capabilities of the benchmark companies.

Cause and Effect Diagram: The “cause and effect” diagram is an analytical tool to indicate the relationship between an “effect” and all possible “causes” influencing it. This is sometimes referred to as fishbone diagram, Ishikawa diagram, or feather diagram.

Characteristics Matrix

A characteristics matrix is a display of the relationship between process parameters and manufacturing stations. A method of developing the characteristics matrix is to number the dimensions and/or features on the part print and each manufacturing operation. All manufacturing operations and stations appear across the top, and the process parameters are listed down the left-hand column. The more manufacturing relationships there are, the more important the control of a characteristic becomes. Regardless of matrix size, the upstream relationships of characteristics are evident. A typical matrix is shown below.

CHARACTERISTICS MATRIX

 

(EXAMPLE)

 

DIM.                               OPERATION NOS.
NO.    DESCRIPTION    TOLERANCE    05    10    20    30
1      ID                          X     C           X
2      FACE                              X     C     C
3                                        X     L     L
4                                              X
5                                              X
6      OD                                      X
  • C = Characteristic at an operation used for clamping
  • L = Characteristic at an operation used for locating
  • X = Characteristic created or changed by this operation (operation numbers should match the process flow diagram)

Critical Path Method

The critical path method can be represented as a PERT or Gantt chart that shows the chronological sequence of tasks requiring the greatest expected time to accomplish. It can provide valuable information as to:

  • Interrelationships
  • Early Forecast of Problems
  • Identification of Responsibility
  • Resource Identification, Allocation and Leveling

Design of Experiments (DOE)

A designed experiment is a test or sequence of tests in which potentially influential process variables are systematically changed according to a prescribed design matrix. The response of interest is evaluated under the various conditions to: (1) identify the influential variables among the ones tested, (2) quantify the effects across the range represented by the levels of the variables, (3) gain a better understanding of the nature of the causal system at work in the process, and (4) compare the effects and interactions. Application early in the product/process development cycle can result in: (1) improved process yields, (2) reduced variability around a nominal or target value, (3) reduced development time, and (4) reduced overall costs.
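
For illustration only (the factors, levels, and responses below are made up), the following sketch estimates main effects and the interaction for a two-level, two-factor full factorial; a main effect is the average response at the factor’s high level minus the average at its low level:

    # Hypothetical 2x2 full factorial: factors A and B coded -1/+1,
    # one observed response y per run (made-up data).
    runs = [
        {"A": -1, "B": -1, "y": 61.0},
        {"A": +1, "B": -1, "y": 70.0},
        {"A": -1, "B": +1, "y": 64.0},
        {"A": +1, "B": +1, "y": 79.0},
    ]

    def main_effect(factor):
        hi = [r["y"] for r in runs if r[factor] == +1]
        lo = [r["y"] for r in runs if r[factor] == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    # Interaction AB: contrast of the A*B column divided by half the run count.
    ab = sum(r["A"] * r["B"] * r["y"] for r in runs) / (len(runs) / 2)
    print(f"effect A  = {main_effect('A'):.1f}")  # 12.0 with these data
    print(f"effect B  = {main_effect('B'):.1f}")  # 6.0
    print(f"effect AB = {ab:.1f}")                # 3.0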

Design for Manufacturability and Assembly

Design for Manufacturability and Assembly is a Simultaneous Engineering process designed to optimize the relationship between design function, manufacturability, and ease of assembly. The enhancement of designs for assembly and manufacturing is an important step. Plant representatives should be consulted early in the design process to review components or systems and provide input on specific assembly and manufacturing requirements. Specific dimensional tolerances should be determined based on comparable (like) processes. This will assist in identifying the equipment required and any process changes necessary.

Design Verification Plan and Report (DVP&R)

The Design Verification Plan and Report (DVP&R) is a method to plan and document testing activity through each phase of product/process development from inception to ongoing refinement. An effective DVP&R provides a concise working document that aids engineering personnel in the following areas:

  • Facilitates the development of a logical testing sequence by requiring the responsible areas to thoroughly plan the tests needed to assure that the component or system meets all engineering requirements.
  • Assures product reliability meets customer-driven objectives.
  • Highlights situations where customer timing requires an accelerated test plan.
  • Serves as a working tool for responsible area(s) by:
    • Summarizing functional, durability, and reliability testing requirements and results in one document for ease of reference.
    • Providing the ability to easily prepare test status and progress reports for design reviews.

Mistake Proofing/Error- Proofing

Mistake proofing is a technique to identify errors after they occur. Mistake proofing should be used as a technique to control repetitive tasks or actions and prevent non-conformances from being passed on to the subsequent operation and ultimately the customer. Error-proofing is a technique used to identify potential process errors and either design them out of the product or process, or eliminate the possibility that the error could produce a non- conformance.

Process Flow Charting

Process flow charting is a visual approach to describing and developing sequential or related work activities. It provides both a means of communication and analysis for planning, development activities, and manufacturing processes. Since one goal of quality assurance is to eliminate non-conformities and improve the efficiency of manufacturing and assembly processes, advanced product quality plans should include illustrations of the controls and resources involved. These process flow charts should be used to identify improvements and to locate significant or critical product and process characteristics that will be addressed in control plans to be developed later.


Quality Function Deployment (QFD)

QFD is a systematic procedure for translating customer requirements into technical and operational terms, displaying and documenting the translated information in matrix form. QFD focuses on the most important items and provides the mechanism to target selected areas to enhance competitive advantages. Depending upon the specific product, the technique of QFD may be used as a structure for the quality planning process. In particular, QFD Phase I – Product Planning translates customer requirements into counterpart control characteristics or design requirements. QFD provides a means of converting general customer requirements into specified final product and process control characteristics.

A.  ASPECTS OF QFD

The two dimensions of QFD are:

  • Quality Deployment: Translation of Customer Requirements into Product Design Requirements.
  • Function Deployment: Translation of Design Requirements into appropriate Part, Process and Production Requirements.

B.  BENEFITS OF QFD

  • Increases the assurance of meeting the customer requirements.
  • Reduces number of changes due to increased engineering understanding of customer requirements.
  • Identifies potentially conflicting design requirements.
  • Focuses various company activities on customer-oriented objectives.
  • Reduces product development cycle time.
  • Reduces costs of engineering, manufacturing, and service.
  • Improves quality of product and services.

TEAM FEASIBILITY COMMITMENT

Customer: ______________________________  Date: ______________________________

Part Number: ______________________________

Part Name: ______________________________

Revision Level: ______________________________

Feasibility Considerations

Our product quality planning team has considered the following questions.

The drawings and/or specifications provided have been used as a basis for analyzing the organization’s ability to meet all specified requirements. All “no” answers are supported with attached comments identifying our concerns and/or proposed changes to enable the organization to meet the specified requirements.

YES   NO   CONSIDERATION
  Is product adequately defined (application requirements, etc.) to enable feasibility evaluation?
  Can Engineering Performance Specifications be met as written?
  Can product be manufactured to tolerances specified on drawing?
  Can product be manufactured with process capability that meets requirements?
  Is there adequate capacity to produce product?
  Does the design allow the use of efficient material handling techniques?
  Can the product be manufactured within normal cost parameters? Abnormal cost considerations may include:
  – Costs for capital equipment?
  – Costs for tooling?
  – Alternative manufacturing methods?
  Is statistical process control required on the product?
  Is statistical process control presently used on similar products?
  Where statistical process control is used on similar products:
  – Are the processes in control and stable?
  – Does process capability meet customer requirements?

Conclusion 


Feasible            Product can be produced as specified with no revisions.

Feasible            Changes recommended (see attached).

Not Feasible     Design revision required to produce product within the specified requirements.

Approval

Team Member/Title/Date  Team Member/Title/Date  Team Member/Title/Date  

Team Member/Title/Date  Team Member/Title/Date  Team Member/Title/Date  
  1. Under “required,” for each item indicate the number of characteristics required. Under “acceptable,” for each item, indicate the quantity that was accepted per customer requirements. Under “pending,” for each item, indicate the quantity not accepted. Attach action plan for each item.
  2. Indicate if control plan has been approved by the customer (if required) by circling yes or no. If yes, indicate date approved. If no, attach action plan.
  3. Under “samples,” indicate the quantity of samples inspected for each item. Under “characteristics per sample,” for each item indicate the number of characteristics inspected on each sample for each category. Under “acceptable,” for each item indicate the quantity of characteristics acceptable on all samples. Under “pending,” for each item indicate the quantity of characteristics not accepted. Attach action plan for each item.
  4. Under “required,” for each item indicate the number of characteristics required. Under “acceptable,” for each item indicate the quantity acceptable per Chrysler, Ford and General Motors Measurement Systems Analysis Reference Manual. Under “pending,” for each item, indicate quantity not accepted. Attach action plan for each item.
  5. Under “required,” for each item indicate the quantity required. Under “acceptable,” for each item, indicate the quantity accepted. Under “pending,” for each item, indicate quantity not accepted. Attach action plan for each item.
  6. Under “required,” for each item indicate yes or no to indicate if item is required. Under “acceptable,” for each item indicate yes or no to indicate acceptance. Under “pending,” if answer under “acceptable” is no – attach action plan.
  7. Each team member should sign form and indicate title and date of signature.


AIAG-Production part approval process(PPAP)

Production Part Approval Process (PPAP) defines generic requirements for production part approval, including production and bulk materials. The purpose of PPAP is to determine if all customer engineering design record and specification requirements are properly understood by the organization and that the manufacturing process has the potential to produce product consistently meeting these requirements during an actual production run at the quoted production rates.

PPAP shall apply to internal and external organization sites supplying production parts, service parts, production materials, or bulk materials. For bulk materials, PPAP is not required unless specified by the authorized customer representative. An organization supplying standard catalog production or service parts shall comply with PPAP unless formally waived by the authorized customer representative.
NOTE: See customer-specific requirements for additional information. All questions about PPAP
should be addressed to the authorized customer representative. A customer can formally waive PPAP requirements for an organization. Such waivers can only be issued by an authorized customer representative. An organization or supplier requesting a waiver of a PPAP requirement should contact the authorized customer representative. The organization or supplier should obtain documentation of waivers from the authorized customer representative. Catalog parts (e.g., bolts) are identified and/or ordered by functional specifications or by recognized industry standards.

Section 1: General

Submission of PPAP

The organization shall obtain approval from the authorized customer representative for:

  1. a new part or product (e.g., a specific part, material, or color not previously supplied to the specific customer).
  2. correction of a discrepancy on a previously submitted part.
  3. product modified by an engineering change to design records, specifications, or materials.

NOTE: If there is any question concerning the need for production part approval, contact the authorized customer representative.

Section 2 — PPAP Process Requirements

2.1 Significant Production Run

For production parts, product for PPAP shall be taken from a significant production run. This significant production run shall be from one hour to eight hours of production, with the specific production quantity totaling a minimum of 300 consecutive parts, unless otherwise specified by the authorized customer representative. This significant production run shall be conducted at the production site, at the production rate, using the production tooling, production gaging, production process, production materials, and production operators. Parts from each unique production process, e.g., duplicate assembly line and/or work cell, each position of a multiple cavity die, mold, tool or pattern, shall be measured and representative parts tested.

For Bulk materials: No specific number of “parts” is required. The submitted sample shall be taken in a manner as to assure that it represents “steady-state” operation of the process.
NOTE: For bulk material, production histories of current products may often be used to estimate the initial process capability or performance of new and similar products. In cases where no production history of a similar bulk material product or technology exists, a containment plan may be put into effect until sufficient production has demonstrated capability or performance, unless otherwise specified by the customer.

2.2 PPAP Requirements

The organization shall meet all specified PPAP requirements listed below and also meet all customer-specific PPAP requirements. Production parts shall meet all customer engineering design record and specification requirements including safety and regulatory requirements. Bulk Material PPAP requirements are defined by a completed Bulk Material Requirements Checklist . If any part specifications cannot be met, the organization shall document their problem-solving efforts and shall contact the authorized customer representative for concurrence in determination of appropriate corrective action.
NOTE: Items or records may not necessarily apply to every customer part number from every organization. For example, some parts do not have appearance requirements, others do not have color requirements, and plastic parts may have polymeric part marking requirements. In order to determine with certainty which items must be included, consult the design record, e.g., part print, the relevant Engineering documents or specifications, and your authorized customer representative.

1. Design Record

The organization shall have the design record for the saleable product/part, including design records for components or details of the saleable product/part. Where the design record is in electronic format, e.g., CAD/CAM math data, the organization shall produce a hard copy (e.g., pictorial, geometric dimensioning & tolerancing [GD&T] sheets, drawing) to identify measurements taken.
NOTE: For any saleable product, part or component, there will only be one design record, regardless of who has design responsibility. The design record may reference other documents making them part of the design record. A single design record can represent multiple part or assembly configurations, e.g., a sub-frame assembly with various hole configurations for different applications. For parts identified as black box, the design record specifies the interface and performance requirements. For parts identified as catalog parts, the design record may consist only of a functional specification or a reference to a recognized industry standard. For bulk materials, the design record may include identification of raw materials, formulations, processing steps and parameters, and final product specifications or acceptance criteria. If dimensional results do not apply, then CAD/CAM requirements are also not applicable.

a) Reporting of Part Material Composition

The organization shall provide evidence that the Material/Substance Composition reporting that is required by the customer has been completed for the part and that the reported data complies with all customer-specific requirements.
NOTE: This materials reporting may be entered into the IMDS (International Materials Data System) or other customer-specified system/method. IMDS is available through http://www.mdsystem.com/index.jsp.

b) Marking of Polymeric Parts

Where applicable, the organization shall identify polymeric parts with the ISO symbols such as specified in ISO 11469, “Plastics – Generic identification and marking of plastics products” and/or ISO 1629, “Rubber and latices – Nomenclature.” The following weight criteria shall determine if the marking requirement is applicable:

  • Plastic parts weighing at least 100g (using ISO 11469/1043-1)
  • Elastomeric parts weighing at least 200g (using ISO 11469/1629)

NOTE: Nomenclature and abbreviation references to support the use of ISO 11469 are contained in ISO 1043-1 for basic polymers and in ISO 1043-2 for fillers and reinforcements.

2. Authorized Engineering Change documents

The organization shall have any authorized engineering change documents for those changes not yet recorded in the design record but incorporated in the product, part or tooling.

3 Customer Engineering Approval

Where specified by the customer, the organization shall have evidence of customer engineering approval.
NOTE: For bulk materials, this requirement is satisfied by a signed “Engineering Approval” line item on the Bulk Material Requirements Checklist and/or inclusion on a customer-maintained list of approved materials.

4 Design Failure Mode and Effects Analysis (Design FMEA)- if the organization is product design-responsible

The product design-responsible organization shall develop a Design FMEA in accordance with, and compliant to, customer-specified requirements (e.g., Potential Failure Mode and Effects Analysis reference manual).
NOTE: A single Design FMEA may be applied to a family of similar parts or materials.

5 Process Flow Diagram

The organization shall have a process flow diagram in an organization-specified format that clearly describes the production process steps and sequence, as appropriate, and meets the specified customer needs, requirements and expectations (e.g., Advanced Product Quality Planning and Control Plan reference manual). For bulk materials, an equivalent to a Process Flow Diagram is a Process Flow Description.
NOTE: Process flow diagrams for ‘families’ of similar parts are acceptable if the new parts have been reviewed for commonality by the organization.

6 Process Failure Mode and Effects Analysis (Process FMEA)

The organization shall develop a Process FMEA in accordance with, and compliant to, customer-specified requirements, (e.g., Potential Failure Mode and Effects Analysis reference manual).

NOTE: A single Process FMEA may be applied to a process manufacturing a family of similar parts or materials if reviewed for commonality by the organization.

7 Control Plan

The organization shall have a Control Plan that defines all methods used for process control and complies with customer-specified requirements (e.g., Advanced Product Quality Planning and Control Plan reference manual).
NOTE: Control Plans for “families” of parts are acceptable if the new parts have been reviewed for commonality by the organization. Control Plan approval may be required by certain customers.

8 Measurement System Analysis Studies

The organization shall have applicable Measurement System Analysis studies, e.g., gage R&R, bias, linearity, stability, for all new or modified gages, measurement, and test equipment.
NOTE: Gage R&R acceptability criteria are defined in the Measurement Systems Analysis reference manual. For bulk materials, Measurement System Analysis may not apply. Customer agreement should be obtained on actual requirements.
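
As a simplified sketch of one element of such a study (repeatability only; the readings and tolerance below are hypothetical, and the full average-and-range and ANOVA methods in the MSA reference manual also address reproducibility, bias, linearity, and stability):

    import statistics

    # Hypothetical study: one appraiser measures each of 5 parts 3 times.
    trials = {
        1: [10.01, 10.02, 10.01],
        2: [10.05, 10.04, 10.06],
        3: [9.98, 9.99, 9.98],
        4: [10.10, 10.09, 10.11],
        5: [10.00, 10.01, 10.00],
    }
    tolerance = 0.30  # hypothetical total tolerance (USL - LSL)

    # Pool the within-part variances to estimate repeatability (equipment variation).
    pooled_var = statistics.mean(statistics.variance(m) for m in trials.values())
    ev_sigma = pooled_var ** 0.5
    print(f"repeatability sigma = {ev_sigma:.4f}")
    print(f"%EV of tolerance (6*sigma / tolerance) = {100 * 6 * ev_sigma / tolerance:.1f}%")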

9 Dimensional Results

The organization shall provide evidence that dimensional verification required by the design record and the Control Plan has been completed and that results indicate compliance with specified requirements. The organization shall have dimensional results for each unique manufacturing process, e.g., cells or production lines and all cavities, molds, patterns or dies. The organization shall record, with the actual results: all dimensions (except reference dimensions), characteristics, and specifications as noted on the design record and Control Plan. The organization shall indicate the date of the design record, change level, and any authorized engineering change document not yet incorporated in the design record to which the part was made. The organization shall record the change level, drawing date, organization name, and part number on all auxiliary documents (e.g., supplementary layout results sheets, sketches, tracings, cross sections, CMM inspection point results, geometric dimensioning and tolerancing sheets, or other auxiliary drawings used in conjunction with the part drawing). Copies of these auxiliary materials shall accompany the dimensional results according to the Retention/Submission Requirements Table. A tracing shall be included when an optical comparator is necessary for inspection. The organization shall identify one of the parts measured as the master sample.
NOTE: The Dimensional Results form, a pictorial, geometric dimensioning & tolerancing [GD&T] sheets, or a checked print where the results are legibly written on a part drawing (including cross-sections, tracings, or sketches as applicable) may be utilized for this purpose. Dimensional results typically do not apply to bulk materials.

10 Records of Material / Performance Test Results

The organization shall have records of material and/or performance test results for tests specified on the design record or Control Plan.
Material Test Results
The organization shall perform tests for all parts and product materials when chemical, physical, or metallurgical requirements are specified by the design record or Control Plan. Material test results shall indicate and include:

  • the design record change level of the parts tested;
  • any authorized engineering change documents that have not yet been incorporated in the design record;
  • the number, date, and change level of the specifications to which the part was tested;
  • the date on which the testing took place;
  • the quantity tested;
  • the actual results;
  • the material supplier’s name and, when required by the customer, the customer-assigned supplier/vendor code.

NOTE: Material test results may be presented in any convenient format.

For products with customer-developed material specifications and a customer-approved supplier list, the organization shall procure materials and/or services (e.g., painting, plating, heat-treating, welding) from suppliers on that list.

Performance Test Results

The organization shall perform tests for all part(s) or product material(s) when performance or functional requirements are specified by the design record or Control Plan. Performance test results shall indicate and include:

  • the design record change level of the parts tested;
  • any authorized engineering change documents that have not yet been incorporated in the design record;
  • the number, date, and change level of the specifications to which the part was tested;
  • the date on which the testing took place;
  • the quantity tested;
  • the actual results.

NOTE: Performance test results may be presented in any convenient format.

11. Initial Process Studies

a) General

The level of initial process capability or performance shall be determined to be acceptable prior to submission for all Special Characteristics designated by the customer or organization. The organization shall obtain customer concurrence on the index for estimating initial process capability prior to submission. The organization shall perform measurement system analysis to understand how measurement error affects the study measurements.
NOTE: Where no special characteristics have been identified, the customer reserves the right to require demonstration of initial process capability on other characteristics. The purpose of this requirement is to determine if the production process is likely to produce product that will meet the customer’s requirements. The initial process study is focused on variable data, not attribute data. Assembly errors, test failures, and surface defects are examples of attribute data, which are important to understand but are not covered by this initial study. Understanding the performance of characteristics monitored by attribute data will require more data collected over time. Unless approved by the authorized customer representative, attribute data are not acceptable for PPAP submissions. Cpk and Ppk are described below. Other methods more appropriate for certain processes or products may be substituted with prior approval from an authorized customer representative. Initial process studies are short-term and will not predict the effects of time and variation in people, materials, methods, equipment, measurement systems, and environment. Even for these short-term studies, it is important to collect and analyze the data in the order produced using control charts. For those characteristics that can be studied using X-bar and R charts, a short-term study should be based on a minimum of 25 subgroups containing at least 100 readings from consecutive parts of the significant production run. The initial process study data requirements may be replaced by longer-term historical data from the same or similar processes, with customer concurrence. For certain processes, alternative analytical tools such as individuals and moving range charts may be appropriate and permitted with prior approval from an authorized customer representative.
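
A minimal sketch of the X-bar and R evaluation described in the note (the readings are simulated stand-ins; A2, D3, and D4 are the standard control-chart constants for a subgroup size of 4):

    import random
    import statistics

    # Hypothetical: 25 subgroups of 4 consecutive parts (100 readings total).
    random.seed(1)
    subgroups = [[random.gauss(10.0, 0.02) for _ in range(4)] for _ in range(25)]

    xbars = [statistics.mean(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = statistics.mean(xbars)
    rbar = statistics.mean(ranges)

    A2, D3, D4 = 0.729, 0.0, 2.282  # constants for subgroup size n = 4
    print(f"X-bar chart: UCL = {xbarbar + A2 * rbar:.4f}, CL = {xbarbar:.4f}, LCL = {xbarbar - A2 * rbar:.4f}")
    print(f"R chart:     UCL = {D4 * rbar:.4f}, CL = {rbar:.4f}, LCL = {D3 * rbar:.4f}")

Points falling outside these limits, or non-random patterns within them, indicate special causes that should be investigated before capability indices are calculated.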

b) Quality Indices
Initial process studies shall be summarized with capability or performance indices, if applicable.
NOTE: The initial process study results are dependent on the purpose of the study, method of data acquisition, sampling, amount of data, demonstration of statistical control, etc. For guidance on the items listed below, contact the authorized customer representative.
Cpk – The capability index for a stable process. The estimate of sigma is based on within-subgroup variation (R-bar/d2 or S-bar/c4). Cpk is an indicator of process capability based on process variation within each subgroup of a set of data. Cpk does not include the effect of process variability between the subgroups. Cpk is an indicator of how good a process could be if all process variation between subgroups were to be eliminated. Therefore, use of Cpk alone may be an incomplete indicator of process performance.
Ppk – The performance index. The estimate of sigma is based on total variation (all of the individual sample data, using the standard deviation [root mean square equation], “s”). Ppk is an indicator of process performance based on process variation throughout the full set of data. Unlike Cpk, Ppk is not limited to the variation within subgroups. However, Ppk cannot isolate within-subgroup variation from between-subgroup variation. When calculated from the same data set, Cpk and Ppk can be compared to analyze the sources of process variation.
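
Written as formulas (for a two-sided specification with limits LSL and USL), the two indices differ only in the sigma estimate:

    C_{pk} = \min\left( \frac{USL - \bar{\bar{X}}}{3\,\hat{\sigma}},\; \frac{\bar{\bar{X}} - LSL}{3\,\hat{\sigma}} \right),
    \qquad \hat{\sigma} = \frac{\bar{R}}{d_2} \;\text{or}\; \frac{\bar{s}}{c_4}

    P_{pk} = \min\left( \frac{USL - \bar{X}}{3s},\; \frac{\bar{X} - LSL}{3s} \right),
    \qquad s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{X})^2}{n - 1}}
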
Initial Process Studies. The purpose of the initial process study is to understand the process variation, not just to achieve a specific index value. When historical data are available or enough initial data exist to plot a control chart (at least 100 individual samples), Cpk can be calculated when the process is stable. Otherwise, for processes with known and predictable special causes and output meeting specifications, Ppk should be used. When not enough data are available (< 100 samples) or there are unknown sources of variation, contact the authorized customer representative to develop a suitable plan.
For Initial Process Studies involving more than one process stream, additional appropriate statistical methods or approaches may be required. For bulk material, the organization should obtain customer agreement regarding the appropriate techniques for initial process studies, if required, in order to determine an effective estimate of capability.

c) Acceptance Criteria for Initial Study

The organization shall use the following as acceptance criteria for evaluating initial process study results for processes that appear stable.

Results                   Interpretation
Index > 1.67              The process currently meets the acceptance criteria.
1.33 ≤ Index ≤ 1.67       The process may be acceptable. Contact the authorized customer representative for a review of the study results.
Index < 1.33              The process does not currently meet the acceptance criteria. Contact the authorized customer representative for a review of the study results.

NOTE: Meeting the initial process study capability acceptance criteria is one of a number of customer requirements that leads to an approved PPAP submission.
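
Tying the indices to the acceptance criteria above, a minimal Python sketch (the specification limits and readings are hypothetical; d2 = 2.326 is the standard constant for subgroups of size 5):

    import random
    import statistics

    LSL, USL = 9.88, 10.12  # hypothetical specification limits
    random.seed(2)
    subgroups = [[random.gauss(10.0, 0.02) for _ in range(5)] for _ in range(25)]
    readings = [x for s in subgroups for x in s]
    mean = statistics.mean(readings)

    # Cpk: sigma estimated from within-subgroup variation (R-bar / d2).
    rbar = statistics.mean(max(s) - min(s) for s in subgroups)
    sigma_within = rbar / 2.326
    cpk = min((USL - mean) / (3 * sigma_within), (mean - LSL) / (3 * sigma_within))

    # Ppk: sigma estimated from total variation (sample std dev of all readings).
    s_total = statistics.stdev(readings)
    ppk = min((USL - mean) / (3 * s_total), (mean - LSL) / (3 * s_total))

    def disposition(index):
        if index > 1.67:
            return "currently meets the acceptance criteria"
        if index >= 1.33:
            return "may be acceptable; contact the authorized customer representative"
        return "does not currently meet the acceptance criteria; contact the customer"

    print(f"Cpk = {cpk:.2f} -> {disposition(cpk)}")
    print(f"Ppk = {ppk:.2f} -> {disposition(ppk)}")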

d) Unstable Processes
Depending on the nature of the instability, an unstable process may not meet customer requirements. The organization shall identify, evaluate and, wherever possible, eliminate special causes of variation prior to PPAP submission. The organization shall notify the authorized customer representative of any unstable processes that exist and shall submit a corrective action plan to the customer prior to any submission.
NOTE: For bulk materials, for processes with known and predictable special causes and output meeting specifications, corrective action plans may not be required by the customer.

e) Processes With One-Sided Specifications or Non-Normal Distributions
The organization shall determine with the authorized customer representative alternative acceptance criteria for processes with one-sided specifications or non-normal distributions.
NOTE: The above-mentioned acceptance criteria assume normality and a two-sided specification (target in the center). When this is not true, using this analysis may result in unreliable information. These alternate acceptance criteria could require a different type of index or some method of transformation of the data. The focus should be on understanding the reasons for the non-normality (e.g., is it stable over time?) and managing variation.

f) Actions To Be Taken When Acceptance Criteria Are Not Satisfied
The organization shall contact the authorized customer representative if the acceptance criteria cannot be attained by the required PPAP submission date. The organization shall submit to the authorized customer representative for approval a corrective action plan and a modified Control Plan normally providing for 100% inspection. Variation reduction efforts shall continue until the acceptance criteria are met, or until customer approval is received.
NOTE: 100% inspection methodologies are subject to review and concurrence by the customer. For bulk materials, 100% inspection means an evaluation of a sample(s) of product from a continuous process or homogeneous batch which is representative of the entire production run.

12 Qualified Laboratory Documentation

Inspection and testing for PPAP shall be performed by a qualified laboratory as defined by customer requirements (e.g., an accredited laboratory). The qualified laboratory (internal or external to the organization) shall have a laboratory scope and documentation showing that the laboratory is qualified for the type of measurements or tests conducted. When an external/commercial laboratory is used, the organization shall submit the test results on the laboratory letterhead or the normal laboratory report format. The name of the laboratory that performed the tests, the date of the tests, and the standards used to run the tests shall be identified.

13 Appearance Approval Report (AAR)

A separate Appearance Approval Report (AAR) shall be completed for each part or series of parts if the product/part has appearance requirements on the design record. Upon satisfactory completion of all required criteria, the organization shall record the required information on the AAR. The completed AAR and representative production products/parts shall be submitted to the location specified by the customer to receive disposition. AARs (complete with part disposition and authorized customer representative signature) shall then accompany the PSW at the time of final submission based upon the submission level requested. See customer-specific requirements for any additional requirements.
NOTE: AAR typically applies only for parts with color, grain, or surface appearance requirements. Certain customers may not require entries in all AAR fields.

14 Sample Production Parts

The organization shall provide sample product as specified by the customer.

15 Master Sample

The organization shall retain a master sample for the same period as the production part approval records, or a) until a new master sample is produced for the same customer part number for customer approval, or b) where a master sample is required by the design record, Control Plan or inspection criteria, as a reference or standard. The master sample shall be identified as such, and shall show the customer approval date on the sample. The organization shall retain a master sample for each position of a multiple cavity die, mold, tool or pattern, or production process unless otherwise specified by the customer.
NOTE: When part size, sheer volume of parts, etc. makes storage of a master sample difficult, the
sample retention requirements may be modified or waived in writing by the authorized customer
representative. The purpose of the master sample is to assist in defining the production standard, especially where data is ambiguous or in insufficient detail to fully replicate the part to its original approved state. Many bulk material properties are by their nature time dependent, and if a master sample is required, it may consist of the manufacturing record, test results, and certificate of analysis of key ingredients, for the approved submission sample.

16 Checking Aids

If requested by the customer, the organization shall submit with the PPAP submission any part-specific assembly or component checking aid. The organization shall certify that all aspects of the checking aid agree with part dimensional requirements. The organization shall document all released engineering design changes that have been incorporated in the checking aid at the time of submission. The organization shall provide for preventive maintenance of any checking aids for the life of the part. Measurement system analysis studies, e.g., gage R&R, accuracy, bias, linearity, stability studies, shall be conducted in compliance with customer requirements.
NOTE: Checking aids can include fixtures, variable and attribute gages, models, templates, mylars
specific to the product being submitted. Checking aids, etc. typically do not apply to Bulk Materials. If checking aids are used for bulk materials, the organization should contact the authorized customer representative regarding this requirement.

17 Customer-Specific Requirements

The organization shall have records of compliance to all applicable customer-specific requirements. For bulk materials, applicable customer-specific requirements shall be documented on the Bulk Material Requirements Checklist.

18 Part Submission Warrant (PSW)

Upon completion of all PPAP requirements, the organization shall complete the Part Submission Warrant (PSW). A separate PSW shall be completed for each customer part number unless otherwise agreed to by the authorized customer representative. If production parts will be produced from more than one cavity, mold, tool, die, pattern, or production process, e.g., line or cell, the organization shall complete a dimensional evaluation on one part from each. The specific cavities, molds, line, etc., shall then be identified in the “Mold/Cavity/Production Process” line on a PSW, or in a PSW attachment. The organization shall verify that all of the measurement and test results show conformance with customer requirements and that all required documentation is available and, for Levels 2, 3, and 4, is included in the submission as appropriate. A responsible official of the organization shall approve the PSW and provide contact information.
NOTE: One warrant per customer part number can be used to summarize many changes providing that the changes are adequately documented, and the submission is in compliance with customer program timing requirements. PSWs may be submitted electronically in compliance with customer requirements.

a) Part Weight (Mass)
The organization shall record on the PSW the part weight of the part as shipped, measured and expressed in kilograms to four decimal places (0.0000) unless otherwise specified by the customer. The weight shall not include shipping protectors, assembly aids, or packaging materials. To determine part weight, the organization shall individually weigh ten randomly selected parts, then calculate and report the average weight. At least one part shall be measured from each cavity, tool, line, or process to be used in product realization.
NOTE: This weight is used for vehicle weight analysis only and does not affect the approval process. Where there is no production or service requirement for at least ten parts, the organization should use the required number for calculation of the average part weight. For bulk materials, the part weight field is not applicable.
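To make the arithmetic concrete, here is a minimal sketch (Python, with hypothetical weights) of the ten-part average described in the requirement above, formatted to four decimal places for the PSW:

```python
# Minimal sketch of the PSW part-weight calculation described above.
# The ten weights below are hypothetical; per the text, parts are weighed
# as shipped, excluding shipping protectors, assembly aids, and packaging.

weights_kg = [1.2034, 1.2041, 1.2029, 1.2037, 1.2046,
              1.2031, 1.2038, 1.2044, 1.2027, 1.2035]  # ten randomly selected parts

average_kg = sum(weights_kg) / len(weights_kg)

# Report on the PSW to four decimal places unless the customer specifies otherwise.
print(f"Part weight (kg): {average_kg:.4f}")
```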

Section 3 – Customer Notification and Submission Requirements

3.1 Customer Notification

The organization shall notify the authorized customer representative of any planned changes to the design, process, or site. Examples are indicated in the table below.
NOTE: Organizations are responsible to notify the authorized customer representative of all changes to the part design and/or the manufacturing process. Upon notification and approval of the proposed change by the authorized customer representative, and after change implementation, PPAP submission is required unless otherwise specified.

Examples of changes requiring notification, with clarifications:

  • Use of a construction or material other than that used in the previously approved part or product.
    Clarification: For example, other construction as documented on a deviation (permit) or included as a note on the design record and not covered by an engineering change.
  • Production from new or modified tools (except perishable tools), dies, molds, patterns, etc., including additional or replacement tooling.
    Clarification: This requirement applies only to tools which, due to their unique form or function, can be expected to influence the integrity of the final product. It is not meant to describe standard tools (new or repaired), such as standard measuring devices, drivers (manual or power), etc.
  • Production following upgrade or rearrangement of existing tooling or equipment.
    Clarification: Upgrade means the reconstruction and/or modification of a tool or machine, an increase in its capacity or performance, or a change to its existing function. This is not to be confused with normal maintenance, repair, or replacement of parts, etc., for which no change in performance is expected and post-repair verification methods have been established. Rearrangement is defined as activity that changes the sequence of product/process flow from that documented in the process flow diagram (including the addition of a new process). Minor adjustments of production equipment may be required to meet safety requirements, such as installation of protective covers or elimination of potential ESD risks.
  • Production from tooling and equipment transferred to a different plant site or from an additional plant site.
    Clarification: Production process tooling and/or equipment transferred between buildings or facilities at one or more sites.
  • Change of supplier for parts, non-equivalent materials, or services (e.g., heat-treating, plating).
    Clarification: The organization is responsible for approval of supplier-provided material and services.
  • Product produced after the tooling has been inactive for volume production for twelve months or more.
    Clarification: Notification is required when the part has had no change in the active purchase order and the existing tooling has been inactive for volume production for twelve months or more. The only exception is when the part has low volume, e.g., service or specialty vehicles. However, a customer may specify certain PPAP requirements for service parts.
  • Product and process changes related to components of the production product manufactured internally or manufactured by suppliers.
    Clarification: Any changes, including changes at the suppliers to the organization and their suppliers, that affect customer requirements, e.g., fit, form, function, performance, durability.
  • Change in test/inspection method – new technique (no effect on acceptance criteria).
    Clarification: For a change in test method, the organization should have evidence that the new method has measurement capability equivalent to the old method.

Additionally, for bulk materials:

  • New source of raw material from a new or existing supplier.
  • Change in product appearance attributes.

Clarification: These changes would normally be expected to have an effect on the performance of the product.

3.2 Submission to Customer

The organization shall submit for PPAP approval prior to the first production shipment in the following situations unless the authorized customer representative has waived this requirement.
NOTE: In the situations described below, prior notification to, or communication with, the authorized customer representative is assumed.
The organization shall review and update, as necessary, all applicable items in the PPAP file to reflect the production process, regardless of whether or not the customer requests a formal submission. The PPAP file shall contain the name of the authorized customer representative granting the waiver and the date.

Situations requiring submission, with clarifications:

  • A new part or product (i.e., a specific part, material, or color not previously supplied to the customer).
    Clarification: Submission is required for a new product (initial release) or a previously approved product that has a new or revised product/part number (e.g., suffix) assigned to it. A new part/product or material added to a family may use appropriate PPAP documentation from a previously approved part within the same product family.
  • Correction of a discrepancy on a previously submitted part.
    Clarification: Submission is required to correct any discrepancies on a previously submitted part. A “discrepancy” can be related to: the product performance against the customer requirements; dimensional or capability issues; supplier issues; approval of a part replacing an interim approval; or testing issues, including material, performance, or engineering validation.
  • Engineering change to design records, specifications, or materials for production product/part number(s).
    Clarification: Submission is required on any engineering change to the production product/part design record, specifications, or materials.

Additionally, for Bulk Materials:

  • Process technology new to the organization, not previously used for this product.

Section 4 – Submission to Customer – Levels of Evidence

4.1 Submission Levels
The organization shall submit the items and/or records specified in the level identified below.

Level 1: Warrant only (and for designated appearance items, an Appearance Approval Report) submitted to the customer.
Level 2: Warrant with product samples and limited supporting data submitted to the customer.
Level 3: Warrant with product samples and complete supporting data submitted to the customer.
Level 4: Warrant and other requirements as defined by the customer.
Level 5: Warrant with product samples and complete supporting data reviewed at the organization’s manufacturing location.

The organization shall use Level 3 as the default level for all submissions unless otherwise specified by the authorized customer representative. The minimum submission requirement for bulk materials is the PSW and the Bulk Materials Requirements Checklist. For bulk material PPAP submissions, check “Other” in the Reason for Submission section on the PSW form and specify “Bulk Material.” This indicates that the “Bulk Material Requirements Checklist” was used to specify the PPAP requirements for the bulk material; the checklist shall be included in the submission packet.

NOTE: The authorized customer representative may identify a submission level, different from the default level, that is to be used with each organization, or organization and customer part number combination. Different customer locations may assign different submission levels to the same organization manufacturing location. All of the forms referenced in this document may be replaced by computer-generated facsimiles. Acceptability of these facsimiles is to be confirmed with the authorized customer representative prior to the first submission.

Retention/Submission Requirements
The table below lists submission and retention requirements. Mandatory and applicable requirements for a PPAP record are defined in the PPAP manual and by the customer.

Section 5 – Part Submission Status

5.1 General

Upon approval of the submission, the organization shall assure that future production continues to meet all customer requirements.
NOTE: For those organizations that have been classified as “self certifying” (PPAP submission level 1) by a specific customer, submission of the required organization-approved documentation will be considered as customer approval unless the organization is advised otherwise.

5.2 Customer PPAP Status

a) Approved
Approved indicates that the part or material, including all sub-components, meets all customer requirements. The organization is therefore authorized to ship production quantities of the product, subject to releases from the customer scheduling activity.

b) Interim Approval
Interim Approval permits shipment of material for production requirements on a limited time or piece quantity basis. Interim Approval will only be granted when the organization has:

  • clearly defined the non-compliances preventing approval; and
  • prepared an action plan agreed upon by the customer.
PPAP re-submission is required to obtain a status of “approved.”

NOTE: The organization is responsible for implementing containment actions to ensure that only acceptable material is being shipped to the customer. Parts with a status of “Interim Approval” are not to be considered “Approved.” Material covered by an interim approval that fails to meet the agreed-upon action plan, either by the expiration date or the shipment of the authorized quantity, will be rejected. No additional shipments are authorized unless an extension of the interim approval is granted. For bulk materials, the organization shall use the “Bulk Material Interim Approval” form, or its equivalent.

c) Rejected

Rejected means that the PPAP submission does not meet customer requirements, based on the production lot from which it was taken and/or accompanying documentation. In such cases, the submission and/or process, as appropriate, shall be corrected to meet customer requirements. The submission shall be approved before production quantities may be shipped.

Section 6 — Record Retention

PPAP records, regardless of submission level, shall be maintained for the length of time that the part is active plus one calendar year. The organization shall ensure that the appropriate PPAP records from a superseded part PPAP file are included, or referenced, in the new part PPAP file.
NOTE: An example of an appropriate document/record that should be carried forward from the old file to the new part file would be a material certification from a raw material supplier for a new part that represents only a dimensional change from the old part number. This should be identified by conducting a PPAP “gap analysis” between the old and new part numbers.

Part Submission Warrant

PART INFORMATION

1. Part Name and 2a. Customer Part Number: Engineering released finished end item part name and number.
2b. Org. Part Number: Part number defined by the organization, if any.

3. Shown on Drawing Number: The design record that specifies the customer part number being submitted.
4. Engineering Change Level & Date: Show the change level and date of the design record.

5. Additional Engineering Changes & Date: List all authorized engineering changes not yet incorporated in the design record but which are incorporated in the part.

6. Safety and/or Government Regulation: “Yes” if so indicated by the design record, otherwise “No.”

7. Purchase Order Number: Enter this number as found on the contract/purchase order.

8. Weight: Enter the actual weight in kilograms to four decimal places unless otherwise specified by the customer.
9./10. Checking Aid Number, Change Level and Date: If requested by the customer, enter the checking aid number, its change level, and date.

ORGANIZATION MANUFACTURING INFORMATION

11. Organization Name & Supplier/Vendor Code: Show the name and code assigned to the manufacturing site on the purchase order/contract.

12. Street Address, Region, Postal Code, Country: Show the complete address of the location where the product was manufactured. For “Region,” enter state, county, province, etc.

CUSTOMER SUBMITTAL INFORMATION

13. Customer Name/Division: Show the corporate name and division or operations group.

14. Buyer/Buyer Code: Enter the buyer’s name and code.

15. Application: Enter the model year, vehicle name, engine, transmission, etc.

MATERIALS REPORTING

16. Substances of Concern: Enter “Yes,” “No,” or “n/a”.
IMDS/Other Customer Format: Circle either “IMDS” or “Other Customer Format” as appropriate. If submitted via IMDS include: Module ID #, Version #, and Creation Date. If submitted via other customer format, enter the date customer confirmation was received.

17. Polymeric Parts Identification: Enter “Yes,” “No,” or “n/a”.

REASON FOR SUBMISSION

18. Check the appropriate box(es). For bulk materials, in addition to checking the appropriate box, check “Other” and write “Bulk Material” in the space provided.

SUBMISSION LEVEL

19. SUBMISSION LEVEL: Identify the submission level requested by the customer.

SUBMISSION RESULTS

20. Check the appropriate boxes for dimensional, material tests, performance tests, appearance evaluation, and statistical data.

21. Check the appropriate box. If “no,” enter the explanation in “comments” below.

22. Molds/Cavities/Production Processes: For instructions, see the Part Submission Warrant.

DECLARATION

23. Enter the number of pieces manufactured during the significant production run.

24. Enter the time (in hours) taken for the significant production run.

25. EXPLANATION/COMMENTS: Provide any explanatory comments on the Submission Results or any deviations from the Declaration. Attach additional information as appropriate.
26. CUSTOMER TOOL TAGGING/NUMBERING: Indicate whether customer-owned tools are identified in accordance with IATF 16949 and any customer-specific requirements; answer “Yes” or “No.” This may not be applicable to OEM internal suppliers.
27. ORGANIZATION AUTHORIZED SIGNATURE: A responsible organization official, after verifying that the results show conformance to all customer requirements and that all required documentation is available, shall approve the declaration and provide Title, Phone Number, Fax Number, and Email address.
FOR CUSTOMER USE ONLY: Leave blank.

Appearance Approval Report

1. Customer Part Number: Engineering released customer part number.
2. Drawing Number: Use the number of the drawing on which the part is shown, if different from the part number.
3. Application: Enter the model year and vehicle or other program on which the part is used.
4. Part Name: Use the finished part name on the part drawing.
5. Buyer Code: Enter the code for the specific buyer of the part.
6/7. E/C Level & Date: Engineering change level and E/C date for this submission.
8. Organization Name: Organization responsible for submission (include supplier if applicable).
9. Manufacturing Location: Location where the part was manufactured or assembled.
10. Supplier/Vendor Code: Customer-assigned code for the organization location where the part was manufactured or assembled.
11. Reason for Submission: Check the box(es) explaining the reason for this submission.
12. Organization Sourcing & Texture Information: List all first surface tools, graining source(s), grain type(s), and grain and gloss masters used to check the part.
13. Pre-Texture Evaluation: To be completed by the authorized customer representative.
14. Color Suffix: Use alphanumeric or numeric color identification.
15. Tristimulus Data: List numerical (colorimeter) data of the submission part as compared to the customer-authorized master.
16. Master Number: Enter alphanumeric master identification.
17. Master Date: Enter the date on which the master was approved.
18. Material Type: Identify first surface finish and substrate (e.g., paint/ABS).
19. Material Source: Identify first surface and substrate suppliers. Example: Redspot/Dow.
20. Color Evaluation, Hue, Value, Chroma, Gloss and Metallic Brilliance: Visual assessment by the customer.
21. Color Shipping Suffix: Color part number suffix or color number.
22. Part Disposition: To be determined by the customer (approved or rejected).
23. Comments: General comments by the organization or customer (optional).
24. Organization Signature, Phone No. & Date: Organization certification that the document information is accurate and meets all requirements specified.
25. Authorized Customer Representative Signature & Date: Authorized customer representative approval signature.
THE AREAS INSIDE THE BOLD LINES ARE FOR CUSTOMER USE ONLY.

Production Part Approval, Dimensional Results

Production Part Approval, Material Test Results

Production Part Approval, Performance Test Results

Bulk Material – Specific Requirements

Introduction

All organizations supplying bulk materials shall comply with the requirements in this Appendix or use guidance herein for clarification of PPAP. The requirements in this Appendix are minimums and may be supplemented at the discretion of the organization and/or the customer.

Applicability

Organizations are responsible for applying PPAP to their suppliers of ingredients which have organization-designated special characteristics. Where OEM PPAP approval of a bulk material exists, evidence of that approval is sufficient as the PPAP submission at other levels in the supply chain. Examples of bulk material include, but are not limited to: adhesives and sealants (solders, elastomers); chemicals (rinses, polishes, additives, treatments, colors/pigments, solvents); coatings (top coats, undercoats, primers, phosphates, surface treatments); engine coolants (antifreeze); fabrics; film and film laminates; ferrous and nonferrous metals (bulk steel, aluminum, coils, ingots); foundry (sand/silica, alloying materials, other minerals/ores); fuels and fuel components; glass and glass components; lubricants (oils, greases, etc.); monomers, pre-polymers and polymers (rubbers, plastics, resins and their precursors); and performance fluids (transmission, power steering, brake, refrigerant).

Bulk Materials Requirements Checklist
For bulk material, the PPAP elements required are defined by the Bulk Materials Requirements Checklist. Any customer-specific requirements shall be documented on the Bulk Materials Requirements Checklist. Use the Bulk Materials Requirements Checklist as follows:

  • Required / Target Date: For each item listed in the checklist, either enter a target date for completion of the element or enter “NR” for Not Required.
  • Primary Responsibility – Customer: Identify by name or function the individual who will review and approve the element.
  • Primary Responsibility – Organization: Identify by name or function the individual who will assemble and assure the completeness of the element to be reviewed.
  • Comments / Conditions: Identify any qualifying information or references to attached documents that provide specific information regarding the element. For example, this may include specific formats to be used for the Design Matrix or acceptable tolerances for Measurement System Analysis (MSA) studies.
  • Approved by: Enter the initials of the authorized customer representative who has reviewed and accepted the element.
  • Plan agreed to by: Identify the individuals (and their functions) who made and agreed upon the project plan.

Design Matrix

Organizations supplying bulk material generally deal with the chemistry and functionality of the product being designed. For bulk materials, a Design Matrix, when required, shall be prepared prior to developing the Design FMEA. The Design Matrix determines the complex interactions of formula ingredients, ingredient characteristics, product characteristics, process constraints, and conditions for customer use. High-impact items can then be effectively analyzed in the Design FMEA. Use of these suggestions arrives at the same end point of a completed Design FMEA, but with greater applicability to bulk materials.

Design Matrix —— Elaboration
This matrix correlates customer expectations with the product design items. Construct the Design Matrix by referring to the example that follows:

  1. Along the horizontal axis, list the Functions (Desired Attributes/Potential Failure Modes).
  2. Along the vertical axis, list the design items as Potential Causes (Category/Characteristics):
    • Formula Ingredients
    • Ingredient Characteristics
    • Product Characteristics
    • Process Constraints
    • Conditions for Use (customer process constraints)
  3. For each design item, enter the current robust threshold range levels and units.
  4. Correlate the potential causes to the potential failure modes using a number, letter, or symbol representing the impact or strength of the relationship. Ask what would happen if a potential cause item is allowed to go under or over its robust minimum or maximum, respectively.
  5. After completion of the rankings in the Design Matrix, review the category/characteristics for a preliminary assessment of Special Characteristics. Designate any Special Characteristics in column 1.
  6. The high negative impact potential causes are transferred to the Design FMEA for analysis.

Negative impact on customer expectation: High = 3, Medium = 2, Low = 1, None = 0, Unknown = ?. A minimal data-structure sketch of such a matrix follows.
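As a rough illustration of how such a matrix can be handled, the following sketch (Python; the design items, functions, and scores are hypothetical) records impact rankings using the scale above and extracts the high-negative-impact causes for transfer to the Design FMEA:

```python
# Minimal sketch of a Design Matrix as a nested dict, using the impact
# scale above (3 = High, 2 = Medium, 1 = Low, 0 = None, "?" = Unknown).
# Functions, design items, and scores are hypothetical.

design_matrix = {
    # design item (potential cause): {function (potential failure mode): impact}
    "resin % (formula ingredient)":         {"adhesion loss": 3, "color shift": 1},
    "pigment grind (ingredient char.)":     {"adhesion loss": 0, "color shift": 3},
    "mix temperature (process constraint)": {"adhesion loss": 2, "color shift": "?"},
}

# Transfer the high-negative-impact causes (score 3) to the Design FMEA.
high_impact = [
    (item, function)
    for item, scores in design_matrix.items()
    for function, score in scores.items()
    if score == 3
]

for item, function in high_impact:
    print(f"Design FMEA candidate: {item} -> {function}")
```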

Design FMEA

Effects of Failure and Severity Rankings

The following two steps provide an alternative method for identifying the Potential Effects of Failure and assigning a Severity Ranking.
List Effects of Failure

  • Consumer Effects – General terms identifying the loss experienced by the ultimate user of the product (e.g., the vehicle buyer).
  • Customer Effects – General terms identifying the loss experienced by the intermediate user of your product (e.g., the vehicle manufacturer).

Assign a Severity Ranking to each Effect

  • See the Severity Definition and Evaluation Criteria in the Potential Failure Mode and Effects Analysis reference manual.
  • The goal for each of the factors that are multiplied to arrive at the Risk Priority Number (RPN) is to differentiate between the items in that category (see the sketch following this list). The following figure provides a guideline for severity rankings. If your situation uses only a small portion of the scale, develop your own scale to improve the differentiation. If your situation is more than two tiers back from the final consumer, the guideline figure should be adjusted to reflect the effects that will be felt by your customer’s customer.
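As referenced above, a minimal sketch of the RPN arithmetic (Python, with hypothetical rankings); the point is that rankings spread across the scale differentiate risk, while compressed rankings do not:

```python
# Minimal sketch of the Risk Priority Number arithmetic referenced above:
# RPN = Severity x Occurrence x Detection, each ranked 1-10.
# The example rankings are hypothetical.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Multiply the three FMEA rankings (each 1-10) into a Risk Priority Number."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("FMEA rankings must be between 1 and 10")
    return severity * occurrence * detection

# A compressed scale (e.g., every severity rated 7-8) gives little differentiation;
# spreading rankings across the scale separates high-risk items more clearly.
print(rpn(8, 4, 6))  # same failure cause with weak detection
print(rpn(8, 4, 2))  # same failure cause with strong detection
```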

Potential Cause(s)/Mechanisms of Failure and Design Matrix
From the Design Matrix (if used), list the high negative impact characteristics as the Potential Causes/Mechanisms of Failure which are associated with Potential Failure Modes. Mechanisms are generally described as over or under a certain threshold. These thresholds define the boundaries of the product approval and subsequent requirements for change notification.

Likelihood of Occurrence Rankings
The following step provides an alternate method for assigning Occurrence ratings.
Rank Occurrence – the ranking scale in the Potential Failure Mode and Effects Analysis manual is
difficult to relate to bulk materials and generally results in very low numbers with little differentiation in the ultimate risk. The following matrix is recommended as a replacement. It evaluates the frequency of occurrence based upon observed evidence the formulator has in the design.

Actual Experience: Obtained from appropriate experimentation on the specific final product and the potential failure mode.
Similar Experience: Based upon similar products or processes and the potential failure mode.
Assumption: Based upon a clear understanding of the chemical impact of the material and the
potential failure mode.
Frequency ranking clarifications:

  • High is defined as – Repeated failures
  • Moderate is defined as – Occasional failures
  • Low is defined as – Relatively few failures

Current Design Controls

Design Control: Supplementing the Potential Failure Mode and Effects Analysis manual, bulk material design controls may also include:

  • Designed Experiments (DOEs) – list experiment numbers.
  • Customer validation tests and trial runs – e.g., gravelometer panels, fender sprayouts (list customer reference numbers).
  • Test protocols – list Test Methods, Standard Operating Procedures, etc.
  • Variation of supplier specifications.
  • Formulating practice robust ranges.

Design controls identified by a number should be available so that the relevant content of that control can be understood.

Likelihood of Detection Rankings

The next step provides an alternate method for assigning Detection rankings.
Rank Detection – the ranking scale in the Potential Failure Mode and Effects Analysis manual is difficult to relate to bulk materials and generally results in very low numbers with little differentiation in the ultimate risk. The following matrix may be used instead. It evaluates Detection as the ability of the current Design Control to actually detect a cause of failure and/or failure mode, based upon the assessed Test Method R&R (percent of specification range) and the quality of evidence.

DOE (Response Surface Analysis): Symmetric design space analyzed with appropriate statistical tools.
Screening Experiments: Screening design or ladder evaluation strategically set to develop DOE.
Assumption/Experience: Information/data based upon similar products or processes.
NOTE: The above R&R limits are suggested unless otherwise agreed upon by the customer and organization. R&R calculations can initially be based on design matrix thresholds.

Process FMEA

Special Characteristics

If product characteristics/attributes can, through normal variation, move outside their design-intended robust range with significant impact, they are designated special and must be controlled by special controls.

Special Characteristics – Elaboration
For clarification purposes, the following figure is intended to demonstrate the flow of potential special characteristics through the supply chain.

Control Plan

The Bulk Material Control Plan serves as a mechanism to:

  • Highlight Special Product/Process Characteristics and their controls
  • Link together sources of control methods, instructions and specification/tolerance limits and reference them in one document

Additionally, this control plan is not intended to recreate specification and/or tolerance limits that exist in other control sources such as batch tickets, work instructions, and testing protocols.

Control Plan – Elaboration

Refer to the customer’s specified control plan format.

  • Prototype (when required) – A listing of tests, evaluations and their associated specifications/tolerances used to assess an experimental or developmental formulation. This may be the only control plan that is product specific.
  • Pre-launch – Documentation of the product/process control characteristics, process controls affecting Special Characteristics, associated tests, and measurement systems employed during product scale up and prior to normal production.
  • Production – Documentation of the product/process control characteristics, process controls affecting Special Characteristics, associated tests, and measurement systems employed during normal production. Additional items may be included at the Organization’s discretion.

Pre-launch and production control plans may be applied to a family of products or specific processes.

Measurement System Analysis (MSA) Studies

Bulk materials often require further processing after sampling in order to make a measurement. Measurements are often destructive in nature, which prevents retesting the same sample. Measurement variability is often much larger for properties important in the process industries (e.g., viscosity and purity) than it is for properties measured in mechanical industries (e.g., dimensions). Measurement may account for 50% or more of the total observed variation. Standardized test methods (e.g., ASTM, AMS, ISO) are often followed, and the organization need not re-verify bias, linearity, stability, and gage R&R for them. MSA studies are not required where standardized tests are used; however, it is still important for the organization to understand the measurement component of variation in the test methods used. Customer agreement on the actual requirements for MSA for either non-standard test methods or “new-to-supplier” test methods should be obtained during the planning phase. Any MSA studies should be applied to each test method associated with Special Characteristics, and not to each individual product measured by the test method. Therefore, the MSA studies should be conducted as broadly as possible across all products which use a particular test method. If the resulting variability is unacceptable, then either the studies should be conducted on a narrower class of products or action should be taken to improve the test method.
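As one way to quantify the measurement component of variation discussed above, the following sketch (Python; data and specification limits are hypothetical) estimates test-method spread from repeated measurements of a single sample and expresses it as a percent of the specification range. This is a repeatability-only estimate; a full Gage R&R study per the MSA manual would also include operator and part effects:

```python
# Minimal sketch of expressing test-method variation as a percent of the
# specification range. Data and limits are hypothetical; a full Gage R&R
# study (operators x parts x trials) would follow the MSA reference manual.

import statistics

# Repeated viscosity measurements of a single homogeneous sample (hypothetical).
repeats = [101.2, 99.8, 100.5, 100.9, 99.6, 100.3, 101.0, 100.1]

spec_low, spec_high = 95.0, 105.0   # hypothetical specification limits

sigma_ms = statistics.stdev(repeats)          # measurement-system std. dev. (repeatability only)
grr_pct_of_tolerance = 100 * (6 * sigma_ms) / (spec_high - spec_low)

print(f"Measurement spread (6-sigma): {6 * sigma_ms:.3f}")
print(f"%R&R of specification range: {grr_pct_of_tolerance:.1f}%")
```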

Initial Process Studies for Special Characteristics

The manufacture of bulk materials spans a variety of production processes, from high-volume products to specialty products produced in small quantities no more than once or twice per year. Often the production process is completed or already in place before sufficient samples can be tested. By the time the product is made again, personnel and/or equipment may have changed. These processes also have numerous input variables, many control variables, and a variety of product variations. There are non-linearities, meaning for example that doubling the change in a particular input does not necessarily double the change in the output. The effects and relationships between all these variables and controls are also not usually known without error. Multiple processes are usually interconnected, sometimes with feedback loops. There are also timing considerations and delays in reaction time. Further, measurements of component variables are generally less precise than measurements of component parts, such that in many cases correlated variables must be used.

Master Sample

The requirements for a master sample, or equivalent, shall be agreed upon by the customer and organization.
Physical Sample: Some bulk materials are stable and unchanging over an extended period of time (e.g., they do not significantly change physical or chemical composition, if properly stored, for decades). In this case, a physical sample will serve as a Master Sample.
Analytical Sample Record: Other bulk materials change with time, but can be precisely quantified by appropriate analytical techniques. In this case the analytical record (e.g., an Ultraviolet or Infrared spectral “fingerprint,” or an Atomic Absorption or Gas Chromatographic-Mass Spectrometric analysis) is an appropriate Master Sample.
Manufacturing Sample Record: When bulk materials cannot be distinctly identified or change over time, a manufacturing sample record should be generated. The record should include the information required to manufacture a “normal production size” run (lot or batch), according to the final “Production Control Plan” supporting the PSW. This record provides an “audit trail” to the information, which may be stored in various documents and/or electronic systems. The following is the basic information suggested to accomplish this task:

  • The quantity of product produced.
  • The important performance results.
  • The raw materials utilized (including manufacturer, Lot #, and important property records).
  • The critical equipment required to manufacture the bulk material.
  • Analytical sample records, as described above, on the material as produced.
  • Batch ticket used to manufacture the bulk material.

Part Submission Warrant

A Part Submission Warrant shall be prepared and submitted for approval when required by the customer. If a customer agrees that PPAP is not required, no warrant needs to be prepared. The information required by the Submission Warrant which does not apply to bulk material (e.g., part weight, dimensional measurement) does not need to be provided. For those organizations that have been classified as “self certifying” by a specific customer, submission of a warrant signed only by the organization shall be evidence of PPAP approval, unless the organization is advised otherwise. For all other organizations, evidence of PPAP approval shall be a warrant signed by both the authorized customer representative and organization, or other customer approval documents.

Interim Approval
Most products will achieve approval prior to initial use. In cases where approval cannot be obtained, a “Bulk Material Interim Approval” may be granted. A form is shown on the facing page; other forms may be used.
COMPLETION OF THE BULK MATERIAL INTERIM APPROVAL FORM

1. ORGANIZATION NAME: Name assigned to the Organization’s manufacturing location.
2. PRODUCT NAME: The Organization’s designated name for the product, as identified in the Customer’s Engineering Release Documents.
3. SUPPLIER/VENDOR CODE: Code (DUNS number or equivalent) assigned to the manufacturing location as shown on the Customer’s purchase order.
4. ENG. SPEC: Customer’s identified Specification through which the product is approved and released.
5. MANUF. SITE: Physical address of the manufacturing location as shown on the Customer’s purchase order.
6. PART #: Customer’s Part Number.
7. ENG. CHANGE #: Formula Revision Level or number identifying the formula.
8. FORMULA DATE: Engineering Release Date of the formula identified in item #7.
9. RECEIVED DATE: Customer Use Only.
10. RECEIVED BY: Customer Use Only (Customer Representative).
11. SUBMISSION LEVEL: Submission Level (1-5) that the Organization is required to submit to, as defined by the Customer.
12. EXPIRATION DATE: Date that the Interim Approval expires.
13. TRACKING CODE: Customer Use Only.
14. RE-SUBMISSION DATE: Date organization will resubmit for production approval.
15. STATUS: For each item, enter appropriate code (NR- Not Required, A-Approved, I-Interim).
16. SPECIFIC QUANTITY OF MATERIAL AUTHORIZED: Used when the Interim Approval specifies a specific quantity or volume of product.
17. PRODUCTION TRIAL AUTHORIZATION: Customer’s Engineering Release authorizing the use of the product in the Customer’s facility.
18. REASON(S) FOR INTERIM APPROVAL: Indicate the reason for the Interim Request.
19. ISSUES TO BE RESOLVED, EXPECTED COMPLETION DATE: For each item marked as “I” in #15, provide explanatory details regarding problem issues and furnish a date for problem resolution.
20. ACTIONS TO BE ACCOMPLISHED DURING INTERIM PERIOD, EFFECTIVE DATE: What is being done to ensure defective product is contained, the date when the action was implemented, and the Exit Criteria necessary to end the need for continuing the action or its individual elements.
21. PROGRESS REVIEW DATE: Update on progress of problem resolution, generally the midpoint from issuance to expiration of the interim period.
22. DATE MATERIAL DUE TO PLANT: Date material is due to Customer’s site.
23. WHAT ACTIONS ARE TAKING PLACE TO ENSURE THAT FUTURE SUBMISSIONS WILL CONFORM TO ALL PPAP REQUIREMENTS BY THE SAMPLE PROMISE DATE? Why won’t this happen again?
24. ORGANIZATION: Responsible and authorized Organization official who ensures compliance with the above-mentioned actions and dates.
25. PRODUCT ENG.: Product Engineer’s signature, printed name, phone number, and date.
26. MATERIALS ENG.: Material Engineer’s signature, printed name, phone number, and date.
27. QUALITY ENG: Quality Engineer’s signature, printed name, phone number, and date.
28. INTERIM APPROVAL NUMBER: Customer Use Only.

Customer Plant Connection

1 Customer’s Responsibilities
The customer plant connection is a shared responsibility between the organization supplying bulk material and the customer. This connection defines the interaction of specific customer plant processing steps with Special Characteristics and final product attributes of the bulk material. This interaction is especially significant when bulk materials undergo chemical or physical transformation(s). Three key components of the Customer Plant Connection are the development of a Customer Process Matrix, determination of Special Characteristics from the Customer Process Matrix, and the preparation of a Control Plan which systematically directs corrective actions. For bulk materials, conducting the steps outlined in this “Customer Plant Connection” is highly recommended.
NOTE: It is not the intent of PPAP to compromise proprietary information.
2 Customer Plant Connection – Clarification
The following is applicable to materials that are transformed from bulk (e.g., a wet can of paint) to final product (e.g., a cured paint film). This may not be applicable to all bulk materials (e.g., washer fluid, engine oil). It is the organization’s responsibility to deliver the product to the customer with the characteristics of the bulk material per the organization and customer agreement. The impact of the transformation of bulk materials by the customer plant on final product attributes may be accounted for in the customer’s application process. During the transformation from bulk product to final product, both bulk product characteristics and final product attributes may be impacted by customer process controls. PPAP does not require a Process FMEA or Control Plan for the customer process. Since the product is frequently two products (bulk and finished), there is a shared responsibility for the final product attributes. For example, the percent solids and viscosity of a bulk coating, which impact the final coating’s film build attribute, may be affected by the customer’s mix room percent solvent reduction. The percent reduction process parameter may therefore be controlled to aid in control of film build. The process steps at customer plants may be matrixed against the Special Characteristics (determined jointly by the organization and the customer). Where high impact is evident, those process steps may be analyzed by the Process FMEA methodology. The Special Characteristics may then be determined and included in a Control Plan for the customer process. These special control characteristic items may be monitored and continuously improved.

3 Customer Plant Connection – Guidelines
The following is a recommended set of guidelines for the customer plant when implementing process controls for bulk materials.

  1. Assemble cross—functional teams of customer personnel for each customer process area.
    Include appropriate organization representatives on each team.
  2. Select Champions for each team – these are the customer process owners (i.e., chief process
    engineer, area supervisor, etc.).
  3. Define critical customer handling, application steps and process parameters in each area.
  4. Review the organization’s Design Matrix and Design FMEA items for application functions
    which have been designated as Special Characteristics. Also review the desired final product
    attributes for items needing control.
  5. From #4, develop a list of Special Characteristics and Attributes.
  6. Construct a Customer Process Matrix, using #3 as the top and #5 as the side of the matrix.
  7. Perform a Customer Process FMEA, focusing on the high-impact customer process areas which impact the Special Characteristics.
  8. Determine Special Characteristics from the Customer Process Matrix and PFMEA (e.g., paint fluid flow, gun distance, etc.).
  9. Prepare a Control Plan for each affected customer process area. The plan might contain at a minimum all process steps containing Special Characteristics.
  10. Monitor and record all Special Characteristics by appropriate means (control charts, checklists, etc.).
  11. Ensure stability of Special Characteristics and continuously improve where possible.

Tires – Specific Requirements

1 Introduction and Applicability
An organization supplying tires shall comply with the requirements of PPAP. This Appendix is to be used as guidance for clarification of requirements unless otherwise specified by the authorized OEM customer representative. Performance testing, based upon design requirements used by each OEM to select tire construction (technical approval), reduces the need to repeat all tests during PPAP. Specific PPAP confirmation tests are specified by each OEM.

2 Guidelines for PPAP Requirements
Significant Production Run: Unless otherwise specified by the OEM, the size of the production run for the PPAP parts is a minimum of 30 tires.
NOTE: The above definition applies to all uses of “significant production run” within PPAP. The typical development of a new tire design involves multiple builds of a small quantity of tires. Most designs are basic to the organization’s process. For the tire industry, PPAP is typically completed with an initial mold or molds, well in advance of customer requirements for large-volume production. The PPAP for the tire industry is typically derived from 1 to 8 hours of tire curing from the approved production process as specified in the organization’s control plan. PPAP is not required for additional molds that are brought on line in the approved production process. All additional molds shall be certified per the organization’s internal certification criteria and documentation. For tires, tooling is defined as the tire mold. This definition of tooling applies to all uses of “tooling” within PPAP.

Material Test Results: Testing is applicable only to finished tires and not to raw materials. Tire industry practice does not require chemical, physical, or metallurgical testing. Material test results are not required for PPAP.
Special Characteristics: Tire uniformity (force variation) and balance are designated Special Characteristics.
Appearance Approval Report (AAR): The AAR requirement is not applicable.
Master Sample: Master samples are not retained.
Process Flow Diagrams: See above.

Checking Aids: Checking aids are not required.
PPAP Submission Warrant: Reporting of multiple cavities, molds, lines, etc. on the PSW is not required for tires.
Part Weight (Mass): PPAP tires are weighed to two (2) decimal places (XX.XX). The average is reported on the PSW to four (4) decimal places (XX.XXXX).

3 Submission to Customer – Levels of Evidence
Retention/Submission Requirements: Records of items submitted (S) and retained (R) are maintained at appropriate locations designated by the organization.

Truck Industry – Specific Requirements

Introduction
An organization supplying to subscribing truck OEMs shall comply with the requirements in this Appendix or use guidance herein for clarification of PPAP. The requirements in this Appendix are minimums and may be supplemented at the discretion of the organization and/or the customer.

Applicability
The following additional requirements are added:

  • The Customer has the right to request a PPAP at any time to re-qualify a production component.
  • Feature Base Process or Part Number Generated components are PPAP qualified using the highest content configuration to qualify the master part number. All other configurations may be approved with the submission of a PSW linking the new part number with the master part number.
  • For bulk material and standard catalog parts, the organization shall formally qualify their product to their design record and submit a PSW when requested by the customer.

Significant Production Run

It is important that adequate quantities of parts be manufactured during this run to confirm the quality and capability of the production process at rate prior to full production. It is recognized that in low-volume applications, sample sizes as small as 30 pieces may be utilized for preliminary process capability studies. When performing the Significant Production Run, all aspects of variability within the production process should be considered and tested where practicable, e.g., set-up variability or other potential process-related issues identified within the PFMEA. Sample sizes must be discussed and agreed to early in the APQP process. If projected volumes are so low that 30 samples are not attainable prior to production, interim PPAP approval may be granted. A dimensional report with 100% inspection of special characteristics is required during the interim period. Once 30 consecutive production samples are produced and measured, and the quality index is calculated and accepted, the interim approval is changed to approved.

Dimensional Results
The organization shall submit, as part of the PPAP package, a copy of the drawing with each dimension, test, and or specification identified with a unique number. These unique numbers shall be entered onto the dimensional or test results sheet as applicable, and actual results entered onto the appropriate sheets. The organization shall also identify the print zone for each numbered characteristic as applicable.

Material Test
The organization shall also submit a completed Design Verification Plan and Report that summarizes appropriate performance and functional test results.

Quality Indices
When the customer specifies special characteristics and the estimated annual usage is less than 500 pieces, the organization shall document in their control plan that they will either perform 100% inspection and record the results, or conduct an initial process capability study with a minimum of 30 production pieces and maintain SPC control charts of the characteristics during production.
For special characteristics that can be studied using variables data, the organization shall utilize one of the following techniques to study the stability of the process: X-bar and R charts (n = 5, plot a minimum of 6 subgroups), or Individual X and Moving Range charts (plot a minimum of 30 data points). A minimal sketch of the X-bar and R option follows below.
When performing the initial process study, data shall be plotted from consecutive parts taken from the production trial run. These studies could be augmented or replaced by long-term results from the same or similar process run on the same equipment, with prior customer concurrence.
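The sketch referenced above illustrates the X-bar and R option using hypothetical measurements; the constants A2, D3, and D4 are the standard Shewhart values for a subgroup size of five:

```python
# Minimal sketch of the X-bar and R chart option described above
# (n = 5, minimum 6 subgroups). Measurement data are hypothetical.

A2, D3, D4 = 0.577, 0.0, 2.114   # Shewhart constants for subgroup size n = 5

subgroups = [                     # six consecutive subgroups of five parts each
    [10.02, 10.01, 9.98, 10.00, 10.03],
    [9.99, 10.00, 10.02, 9.97, 10.01],
    [10.01, 10.03, 10.00, 9.99, 10.02],
    [9.98, 10.00, 10.01, 10.02, 9.99],
    [10.00, 10.01, 9.99, 10.03, 10.00],
    [10.02, 9.98, 10.00, 10.01, 9.99],
]

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges

xbarbar = sum(xbars) / len(xbars)               # grand average (X-bar chart centerline)
rbar = sum(ranges) / len(ranges)                # average range (R chart centerline)

print(f"X-bar chart: CL={xbarbar:.4f}  UCL={xbarbar + A2 * rbar:.4f}  LCL={xbarbar - A2 * rbar:.4f}")
print(f"R chart:     CL={rbar:.4f}  UCL={D4 * rbar:.4f}  LCL={D3 * rbar:.4f}")
```

Plotting the subgroup averages and ranges against these limits shows whether the process is stable before capability indices are calculated.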

Master Sample
The master sample shall be retained after PPAP approval when specified by the Customer.

Part Submission Warrant
When specified by the customer, organizations shall use the Truck Industry PSW.

Part Weight (mass)
The organization may record on the PSW the part weight of the part submitted, measured and expressed in kilograms to four significant figures (e.g., 1000 kg, 100.0 kg, 10.00 kg, and 1.000 kg) unless otherwise specified by the customer. To determine part weight, the organization shall individually weigh ten randomly selected parts, and calculate and report the average weight. At least one part shall be measured from each cavity, tool, line, or process used in product realization.
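Because reporting here is to four significant figures rather than four decimal places, the formatting differs from the earlier PSW example; a minimal sketch (Python, hypothetical weights):

```python
# Minimal sketch of reporting a weight to four significant figures, as in
# the examples above (1000 kg, 100.0 kg, 10.00 kg, 1.000 kg). The weights
# below are hypothetical.

def four_sig_figs(value_kg: float) -> str:
    """Format a weight in kilograms to four significant figures."""
    return f"{value_kg:.4g}"

weights = [1234.6, 103.27, 12.345, 1.23456]
for w in weights:
    print(f"{w} kg -> {four_sig_figs(w)} kg")
```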

Customer Notification
The organization shall notify the customer of any planned design and process changes. The customer may subsequently elect to require a submission for PPAP approval. Organizations supplying to subscribing truck OEMs are required to complete the Product Process Change notification form to advise of forthcoming process or proprietary product changes.

Completion of the Part Submission Warrant
PART INFORMATION
1. Part Name: Engineering released finished end item part name.
2. Customer Part Number(s): Engineering released finished end item part number.
3. Part Revision Level: If applicable.
4. Tool Purchase Order Number: If applicable.
5. Engineering Drawing Change Level & Approval Date: Show change level and date for submission.

6. Additional Engineering Changes: Include all authorized engineering change documents and approval dates not yet incorporated on the drawing but which are incorporated in the part.

7. Shown on Drawing Number: The design record that specifies the customer part number being submitted.

8. Purchase Order Number: Enter this number as found on the purchase order.
9. Part Weight: Enter the actual weight in kilograms to four significant figures.

10. Checking Aid Number: Enter the checking aid number if one is used for dimensional inspection.
11. Checking Aid Engineering Change Level and Approval Date: Enter the checking aid’s engineering change level and approval date.

ORGANIZATION MANUFACTURING INFORMATION

12. Organization Name and Code: Show the code assigned to the manufacturing location on the purchase order.

13. Organization Manufacturing Address: Show the complete address of the location where the product was manufactured.

SUBMISSION INFORMATION

14. Customer Name/Division: Show the corporate name and division or operations group.

15. Contact Name: Enter the name of your customer contact.

16. Application: Enter the model year, vehicle name, engine, transmission, etc.

17. Check the appropriate box to indicate Substances of Concern/ISO marking reporting.

REASON FOR SUBMISSION

18. Check the appropriate box. Add explanatory details in the “other” section.

REQUESTED SUBMISSION LEVEL

19. Identify the submission level requested by your customer. Check the submission items if a level 4 is requested.

DECLARATION

20. Explanation/Comments: Provide any explanatory details on the submission results; additional information may be attached as appropriate.

21. Enter the number or code that identifies the specific mold, cavity, and/or production process used to manufacture the sample parts.

22. The responsible supplier official, after verifying that the results show conformance to all customer requirements and that all required documentation is available, shall approve the declaration and provide Title, Phone Number, Email Address, and Fax Number.

FOR CUSTOMER USE ONLY: Leave blank.

IATF 16949:2016 Clause 9.3.1.1 Management review, Clause 9.3.2.1 Management review inputs, and Clause 9.3.3.1 Management review outputs

The purpose of conducting management reviews of the QMS is to gauge the health of the QMS. The review must determine QMS suitability, adequacy, and effectiveness. Are the QMS resources and controls that were planned and implemented suitable and adequate for the QMS to be effective in achieving customer and regulatory requirements, and in achieving quality objectives? Are changes needed to improve products, processes, and the use of resources?

The process must address the frequency, schedule, quorum, and agenda for review meetings to be attended by top management. For the management review process itself to be effective, top management must plan the review of all agenda items with some regularity to gauge the health of the QMS and take timely action to change or improve any part of it, including the quality policy and objectives. To avoid problems with the frequency and scope of review, an effective approach is to incorporate QMS agenda items into regular monthly or quarterly operational meetings. Some OEMs require management review to be held not less than once a year.

The review of QMS deployment and performance might be measured through gap analysis for new systems and the results of internal audits for established systems. Management review must include the results of such analyses and audits. The costs of internal and external poor quality, as well as process metrics for all processes, must be measured and evaluated against business objectives and customer satisfaction goals. Management review input should preferably be in summary form, showing QMS and operational performance measured against the business and quality plans and against customer and regulatory objectives and goals.

Appropriate actions must result from such reviews. Review decisions and actions must relate to improving products and processes or even creating new ones; providing more resources or improving the efficiency of existing resources; improving QMS controls, policies, and objectives; and improving overall QMS effectiveness and customer satisfaction. Responsibilities and timelines should accompany these decisions and actions. The performance of these actions must be followed up at subsequent management review meetings. Performance indicators to measure the effectiveness of the management review process could include achievement of quality objectives and improvement in customer satisfaction ratings.

You must identify and document the management review process as part of your QMS. You must also identify what specific documents are needed for effective planning, operation, and control of this process. These documents may include a documented procedure, review schedule, agenda and action forms, etc., combined with unwritten practices, procedures, and methods. Management review records must include topics discussed; decisions; responsibilities for corrective or improvement actions and related timelines; provision of resources; and follow-up actions from previous management reviews.

Clause 9.3.1.1 Management review

In addition to the requirements given in ISO 9001:2015 Clause 9.3.1 Management review, Clause 9.3.1.1 requires that management review be conducted at least annually. The frequency of management review(s) must be increased based on risk to compliance with customer requirements resulting from internal or external changes impacting the quality management system and from performance-related issues.

Conducting management reviews at least annually is a fundamental requirement of a Quality Management System (QMS). However, it’s crucial to recognize that the frequency of management reviews should be flexible and responsive to changes and risks that could impact the QMS and its ability to meet customer requirements. Here’s why this approach is important:

  1. Adaptive to Change: The business environment is dynamic, and internal or external changes can have a significant impact on the QMS. Increasing the frequency of management reviews when changes occur allows the organization to quickly assess and respond to new challenges or opportunities.
  2. Risk Management: Changes in the internal or external environment can introduce new risks or modify existing ones. By conducting more frequent management reviews in response to heightened risk, the organization can proactively address potential compliance and performance-related issues.
  3. Customer Focus: The primary goal of a QMS is to meet customer requirements and enhance customer satisfaction. Adapting the frequency of management reviews based on risks to compliance ensures that customer needs and expectations are consistently met.
  4. Continuous Improvement: Frequent management reviews enable the organization to continuously monitor its processes, identify areas for improvement, and implement corrective actions promptly.
  5. Operational Agility: Increasing the frequency of management reviews in response to performance-related issues ensures that the organization remains agile and can swiftly address any operational shortcomings.
  6. Regulatory Compliance: Regulatory requirements and standards may evolve over time. More frequent management reviews can help ensure ongoing compliance with these changing requirements.
  7. Data-Driven Decision-Making: Frequent management reviews provide a steady flow of up-to-date information that top management can use to make informed decisions and guide the organization’s strategic direction.
  8. Organizational Learning: Conducting management reviews more frequently facilitates organizational learning and enhances the organization’s ability to adapt and innovate.
  9. Stakeholder Engagement: By reviewing and addressing changes and risks on a more regular basis, the organization can better engage stakeholders and demonstrate its commitment to quality and continuous improvement.
  10. Performance Monitoring: Frequent management reviews allow for real-time monitoring of performance-related metrics and indicators, helping the organization maintain a high level of operational excellence.
  11. Efficient Problem Solving: More frequent reviews enable quicker identification and resolution of problems, minimizing disruptions and potential negative impacts.
  12. Cultural Emphasis on Quality: A culture of quality and continuous improvement is reinforced when management places a strong emphasis on regularly evaluating and enhancing the QMS.

The periodicity of management reviews should be matched to the evidence that demonstrates the effectiveness of the system. Initially the reviews should be frequent, say monthly, until it is established that the system is effective. Thereafter the frequency of reviews can be modified. If performance has already reached a satisfactory level and no deterioration appears within the next three months, extend the period between reviews to six months. If no deterioration appears in six months extend the period to twelve months. It is unwise to go beyond twelve months without a review as something is bound to change that will affect the system. Shortly after a reorganization (the launch of a new product/service, breaking into a new market, securing new customers, etc.), a review should be held to establish if performance has changed. After new technology is planned, a review should be held before and afterwards to measure the effects of the change. Your procedures need to state the criteria for scheduling the reviews. Don’t set them at a specific period, other than a maximum interval, as it limits your flexibility. You can define the interval between reviews in the minutes of the review meeting, thereby giving you the flexibility to change the frequency when desirable.

Clause 9.3.2.1 Management review inputs

In addition to the requirements given in ISO 9001:2015 Clause 9.3.2 Management review inputs, clause 9.3.2.1 requires the input to management review to include: cost of poor quality (cost of internal and external nonconformance); measures of process effectiveness; measures of process efficiency; product conformance; assessments of manufacturing feasibility made for changes to existing operations and for new facilities or new products; customer satisfaction; review of performance against maintenance objectives; warranty performance, where applicable; review of customer scorecards, where applicable; identification of potential field failures identified through risk analysis (such as FMEA); and actual field failures and their impact on safety or the environment.

Cost of poor quality

A management review is an essential part of a quality management system, where the organization’s top management evaluates the performance of the system and makes decisions for improvement. Including the cost of poor quality in this review can provide valuable insights into the effectiveness of your quality processes. Here’s what you might want to include:

  1. Cost of Internal Nonconformance: This refers to the expenses incurred due to quality issues within your organization. It includes the cost of rework, scrap, retesting, and any other resources required to rectify nonconforming products or processes before they leave your premises. It’s important to calculate these costs accurately to understand the impact on your operations.
  2. Cost of External Nonconformance: External nonconformance refers to quality issues that are identified after your products or services have reached customers or the market. This includes costs associated with customer complaints, returns, replacements, warranty claims, legal issues, and damage to your brand reputation. Calculating these costs helps you gauge the effects of poor quality on customer satisfaction and your organization’s financial health (a worked roll-up of both cost buckets appears at the end of this section).
  3. Data Analysis and Reporting: Provide detailed data and analysis regarding the instances of internal and external nonconformance. Present trends, patterns, and the frequency of occurrences. Visual aids such as graphs and charts can make the data more accessible and understandable for management.
  4. Root Cause Analysis: Include information about the root causes of the nonconformances. Highlight any recurring issues or common factors contributing to poor quality. This will help management identify areas for improvement and allocate resources effectively.
  5. Corrective and Preventive Actions: Outline the corrective and preventive actions that have been taken to address the identified nonconformances. Describe the effectiveness of these actions and whether they have successfully prevented similar issues from recurring.
  6. Impact on Overall Performance: Discuss how the cost of poor quality has impacted the organization’s financial performance, customer satisfaction, and overall business objectives. This could include missed opportunities, increased operational costs, and potential lost revenue.
  7. Continuous Improvement Initiatives: Propose strategies for continuous improvement to reduce the cost of poor quality. These could include process optimization, employee training, quality control enhancements, and other measures to prevent nonconformance from occurring in the future.
  8. Future Goals and Targets: Set specific goals and targets related to reducing the cost of poor quality. Outline a plan for how these goals will be achieved and the expected impact on the organization’s bottom line and reputation.
  9. Management Decision and Action: Based on the information presented, management should make informed decisions regarding resource allocation, process changes, and quality improvement initiatives. These decisions should be documented and tracked for accountability.

Remember, the goal of including the cost of poor quality in a management review is to provide a clear picture of how quality issues are affecting the organization and to drive continuous improvement efforts.
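As a simple illustration of how the internal and external nonconformance costs described above might be rolled up for the review pack (all figures and category names here are invented; a real analysis would draw on your own chart of accounts):

```python
# Hypothetical cost categories in currency units; real account codes will differ.
internal_failure = {"scrap": 42_000, "rework": 18_500, "retest": 6_200}
external_failure = {"warranty_claims": 31_000, "returns": 12_400, "field_service": 9_800}

copq_internal = sum(internal_failure.values())
copq_external = sum(external_failure.values())
copq_total = copq_internal + copq_external

net_sales = 4_750_000  # illustrative period sales
print(f"Internal nonconformance cost: {copq_internal:,}")   # 66,700
print(f"External nonconformance cost: {copq_external:,}")   # 53,200
print(f"COPQ as % of sales: {copq_total / net_sales:.1%}")  # 2.5%
```

Expressing COPQ as a percentage of sales makes the trend comparable across periods of different volume.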

Measures of process effectiveness

Including measures of process effectiveness in the management review of a Quality Management System (QMS) is crucial for evaluating the overall performance of the system and making informed decisions for improvement. These measures help management understand how well the organization’s processes are functioning, whether they are achieving their intended objectives, and where adjustments or enhancements might be needed. Here are some key points to consider:

  1. Key Performance Indicators (KPIs): Identify and present relevant KPIs that provide insights into process effectiveness. These could include metrics such as on-time delivery, customer satisfaction scores, defect rates, rework rates, cycle times, and other indicators that directly reflect the performance of your processes.
  2. Process Efficiency: Evaluate the efficiency of your processes by analyzing factors like resource utilization, waste reduction, and process cycle times. If you’ve implemented process improvements, provide data that demonstrates their impact on efficiency.
  3. Process Stability and Control: Assess the stability and control of your processes by utilizing statistical process control (SPC) charts, control limits, and other tools. Highlight instances where processes have remained within acceptable control limits and flag any trends that could indicate instability (a minimal control-limit calculation is sketched at the end of this section).
  4. Root Cause Analysis: Share information about root cause analysis conducted for any process deviations or failures. This could include detailing the methods used to identify the underlying causes, the corrective actions taken, and the outcomes of those actions.
  5. Customer Feedback: Incorporate feedback from customers related to your processes. Discuss any trends in customer complaints, suggestions, or comments that can provide insights into process effectiveness and areas for improvement.
  6. Process Audits and Inspections: Provide results from internal and external audits or inspections of your processes. Highlight any findings, corrective actions taken, and improvements made based on audit recommendations.
  7. Employee Input: Gather input from employees who are directly involved in the processes. Their perspectives can offer valuable insights into process effectiveness and areas where adjustments could lead to improvements.
  8. Comparison to Objectives: Compare the actual performance of processes to the objectives set in your QMS. This helps management assess whether processes are meeting their intended goals and if any adjustments are necessary.
  9. Trends and Patterns: Present trends and patterns in process performance over time. Visual representations like trend charts or graphs can help management easily identify improvements, declines, or areas of inconsistency.
  10. Benchmarks and Best Practices: Compare your process effectiveness to industry benchmarks or best practices. This can provide context and help management understand where your organization stands in relation to others in the same field.
  11. Impact on Business Goals: Discuss how process effectiveness contributes to achieving overall business objectives, such as increased revenue, reduced costs, improved customer satisfaction, and enhanced competitiveness.
  12. Continuous Improvement: Propose actionable suggestions for continuous improvement based on the analysis of process effectiveness. Highlight areas where focused efforts could lead to meaningful enhancements.

By including measures of process effectiveness in your management review, you enable top management to make informed decisions about allocating resources, prioritizing improvements, and ensuring the QMS aligns with the organization’s strategic goals.
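For the statistical process control point above (item 3), a minimal sketch of how control limits for an individuals (I) chart are commonly computed; the constant 2.66 is the standard SPC factor (3/d2 for a moving range of two), and the sample data are invented:

```python
import statistics

def i_chart_limits(measurements: list[float]) -> tuple[float, float, float]:
    """Return (LCL, centre line, UCL) for an individuals chart:
    centre +/- 2.66 * average moving range."""
    centre = statistics.fmean(measurements)
    moving_ranges = [abs(a - b) for a, b in zip(measurements, measurements[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

lcl, cl, ucl = i_chart_limits([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3])
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```

Points outside the limits, or non-random patterns within them, are the signals that would be flagged in the review.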

Measures of process efficiency

Including measures of process efficiency in the management review of a Quality Management System (QMS) is essential for evaluating the effectiveness of your organization’s processes and identifying opportunities for improvement. Process efficiency is a critical aspect of a well-functioning QMS, as it directly impacts productivity, resource utilization, and overall operational performance. Here’s why it’s important to include measures of process efficiency in your management review:

  1. Performance Evaluation: Process efficiency metrics provide a clear and quantifiable way to assess how well your processes are performing. These metrics help management understand whether processes are meeting their intended objectives and where improvements can be made.
  2. Resource Utilization: Efficiency measures allow you to gauge how effectively your resources (time, labor, materials, equipment) are being used in your processes. Management can identify areas of resource waste or inefficiency and take corrective actions.
  3. Waste Reduction: Measuring process efficiency helps you identify sources of waste, such as rework, excessive wait times, or redundant activities. By pinpointing these areas, you can implement strategies to reduce waste and optimize resource allocation.
  4. Cost Savings: Efficient processes lead to reduced operational costs. Including efficiency metrics in the management review allows management to quantify the financial impact of process improvements and prioritize initiatives that contribute to cost savings.
  5. Process Variability: Monitoring process efficiency helps identify variations in performance. Consistent and efficient processes result in lower variability, leading to more predictable outcomes and higher quality products or services.
  6. Continuous Improvement: Efficiency measures provide a basis for continuous improvement efforts. By tracking efficiency over time, you can assess the impact of improvement initiatives and identify trends that require further attention.
  7. Goal Alignment: Process efficiency metrics can be aligned with your organization’s strategic goals. Management can evaluate whether process efficiency is contributing to achieving broader objectives and adjust strategies accordingly.
  8. Operational Performance: Efficient processes lead to smoother operations, reduced bottlenecks, and faster cycle times. This can result in improved customer satisfaction, shorter lead times, and better overall performance.
  9. Decision-Making: Including efficiency metrics in the management review equips decision-makers with data-driven insights. This enables informed decisions regarding process optimization, resource allocation, and investments in technology or training.
  10. Employee Engagement: Efficient processes often lead to improved employee morale and engagement. When employees see the positive outcomes of their efforts, it can boost motivation and satisfaction.
  11. Benchmarking and Best Practices: Process efficiency metrics can be compared to industry benchmarks or best practices. This external perspective provides valuable context and helps identify areas for improvement.
  12. Communication and Transparency: Inclusion of process efficiency measures in the management review promotes transparency and open communication within the organization. It fosters a culture of accountability and continuous improvement.

When presenting measures of process efficiency in the management review, consider using visual aids such as graphs, charts, and trend analyses to make the data more accessible and understandable for management. Be prepared to discuss the implications of the efficiency metrics, any improvement initiatives undertaken, and future plans for optimizing processes.
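Two ratios that often appear in such reviews are the value-added ratio (value-added time as a share of total cycle time) and first-pass yield (units good without rework as a share of units started). A minimal sketch with invented figures:

```python
def value_added_ratio(value_added_min: float, total_cycle_min: float) -> float:
    """Share of the cycle spent on work the customer actually pays for."""
    return value_added_min / total_cycle_min

def first_pass_yield(units_started: int, good_first_time: int) -> float:
    """Share of units completed correctly with no rework or repair."""
    return good_first_time / units_started

print(f"Value-added ratio: {value_added_ratio(18, 240):.1%}")    # 7.5%
print(f"First-pass yield:  {first_pass_yield(1_000, 940):.1%}")  # 94.0%
```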

Product conformance

Including product conformance in the management review of a Quality Management System (QMS) is a critical aspect of evaluating the overall effectiveness of your organization’s quality processes. Product conformance refers to the extent to which products meet specified requirements and standards. Here’s why it’s important to include product conformance in your management review:

  1. Customer Satisfaction: Product conformance directly impacts customer satisfaction. By reviewing product conformance data, management can assess how well your products are meeting customer expectations and identify areas for improvement to enhance customer satisfaction.
  2. Quality Performance: Product conformance is a fundamental indicator of the quality of your products. It provides insights into the consistency and reliability of your manufacturing or service delivery processes.
  3. Compliance: In many industries, products must adhere to specific regulatory and industry standards. Management needs to ensure that products are conforming to these requirements to avoid legal and compliance issues.
  4. Risk Management: Non-conforming products can pose risks to both customers and the organization. By monitoring product conformance, management can identify potential risks and take proactive measures to mitigate them.
  5. Process Evaluation: Product conformance data can reveal patterns and trends in manufacturing or service delivery processes. This information helps management identify areas of improvement and make informed decisions about process adjustments.
  6. Root Cause Analysis: If non-conforming products are identified, including details about root cause analysis in the management review helps management understand the underlying issues and take appropriate corrective actions.
  7. Continuous Improvement: Product conformance metrics provide a baseline for evaluating the impact of continuous improvement efforts. By tracking changes in conformance rates over time, management can assess the effectiveness of improvement initiatives.
  8. Supplier Performance: If your organization relies on suppliers, product conformance data can also reflect their performance. This can help management make informed decisions about supplier relationships and collaborations.
  9. Decision-Making: Product conformance data informs decision-making related to process optimization, resource allocation, training needs, and investment in quality improvement initiatives.
  10. Communication and Accountability: Including product conformance in the management review promotes transparency and accountability within the organization. It ensures that top management is aware of the current state of product quality and can take appropriate actions to drive improvements.
  11. Strategic Alignment: Product conformance data can be aligned with your organization’s strategic goals. If product quality is a key differentiator or a core part of your value proposition, monitoring conformance helps ensure alignment with strategic objectives.
  12. Benchmarking: Comparing your product conformance rates to industry benchmarks or best practices provides valuable insights into your competitive position and areas where you can excel.

When presenting product conformance data in the management review, consider providing clear and concise summaries, visual aids such as charts and graphs, and contextual information about the significance of the data. Highlight any improvement initiatives, corrective actions taken, and plans for maintaining or enhancing product conformance in the future.
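A minimal sketch of two common conformance summaries for a review pack, the conformance rate and nonconformities in parts per million (the inspection figures are invented):

```python
def conformance_rate(inspected: int, nonconforming: int) -> float:
    return (inspected - nonconforming) / inspected

def nonconforming_ppm(inspected: int, nonconforming: int) -> float:
    return nonconforming / inspected * 1_000_000

print(f"Conformance rate: {conformance_rate(250_000, 43):.4%}")       # 99.9828%
print(f"Nonconforming:    {nonconforming_ppm(250_000, 43):.0f} ppm")  # 172 ppm
```

PPM figures are the customary scale in automotive scorecards, which is why both forms are shown.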

Assessments of manufacturing feasibility

Including assessments of manufacturing feasibility for changes to existing operations, new facilities, or new products in the management review of a Quality Management System (QMS) is crucial for ensuring that potential risks and challenges are evaluated before implementation. This proactive approach helps maintain product quality, operational efficiency, and customer satisfaction. Here’s why it’s important to include these assessments in your management review:

  1. Risk Identification and Mitigation: Assessing manufacturing feasibility helps identify potential risks and challenges that may arise during changes to existing operations, new facility setups, or the introduction of new products. This enables management to take preventive measures and develop strategies to mitigate these risks.
  2. Resource Allocation: Evaluating manufacturing feasibility provides insights into the resources (including manpower, equipment, materials, and time) required for successful implementation. Management can allocate resources effectively and make informed decisions regarding investment and capacity planning.
  3. Operational Efficiency: Feasibility assessments allow management to identify bottlenecks, process constraints, and potential inefficiencies that could impact production or service delivery. Addressing these issues before implementation can lead to smoother operations and reduced disruptions.
  4. Quality Assurance: Changes to operations or the introduction of new products can impact product quality. Assessing manufacturing feasibility helps ensure that quality standards can be maintained or enhanced throughout the changes.
  5. Cost Management: Feasibility assessments enable management to estimate the costs associated with changes to operations, new facilities, or new products. This allows for accurate budgeting and cost control.
  6. Timeline and Project Planning: Understanding manufacturing feasibility helps in developing realistic timelines for implementation. Management can set achievable milestones and deadlines, reducing the likelihood of delays.
  7. Alignment with Strategy: Assessing manufacturing feasibility ensures that proposed changes align with the organization’s overall strategic goals and objectives. Management can evaluate whether the changes support the organization’s mission and long-term vision.
  8. Cross-Functional Collaboration: Feasibility assessments often involve input from various departments, promoting cross-functional collaboration and communication. This holistic approach ensures that all relevant perspectives are considered.
  9. Regulatory Compliance: For industries with regulatory requirements, assessing manufacturing feasibility helps identify potential compliance challenges early on. Management can ensure that changes and new initiatives meet all necessary regulations and standards.
  10. Decision-Making: Including feasibility assessments in the management review provides decision-makers with data-driven insights. This allows management to make well-informed decisions about the feasibility of proposed changes and the potential impact on the organization.
  11. Learning from Past Experiences: If the organization has undergone similar changes in the past, assessing manufacturing feasibility provides an opportunity to learn from previous experiences and apply lessons learned.
  12. Continuous Improvement: The feasibility assessment process itself can be subject to continuous improvement. Management can analyze the effectiveness of past assessments, identify areas for enhancement, and refine the assessment process over time.

When presenting assessments of manufacturing feasibility in the management review, it’s important to provide clear and comprehensive documentation of the assessment process, findings, and recommendations. Visual aids such as flowcharts, diagrams, and cost breakdowns can help convey complex information effectively. Additionally, highlighting any lessons learned or best practices from previous feasibility assessments can add value to the management review process.

Customer satisfaction

Including customer satisfaction in the management review of a Quality Management System (QMS) is essential for maintaining a customer-centric approach and ensuring that your organization’s products or services meet or exceed customer expectations. Customer satisfaction is a key indicator of the effectiveness of your quality processes and the overall success of your business. Here’s why it’s important to include customer satisfaction in your management review:

  1. Customer-Centric Focus: Customer satisfaction emphasizes the importance of meeting customer needs and preferences. Including customer satisfaction in the management review reinforces a customer-centric mindset throughout the organization.
  2. Performance Evaluation: Customer satisfaction provides direct feedback on how well your products, services, and processes are performing from the customer’s perspective. Management can evaluate the success of your QMS in delivering value to customers.
  3. Quality Assurance: Satisfied customers often indicate that your products or services are meeting quality standards and conforming to their expectations. Management can use customer satisfaction data as an assurance of product and service quality.
  4. Continuous Improvement: Customer feedback highlights areas for improvement. By analyzing customer satisfaction data, management can identify trends, recurring issues, or opportunities for enhancement that should be addressed through continuous improvement efforts.
  5. Competitive Advantage: High customer satisfaction can differentiate your organization from competitors. Including customer satisfaction data in the management review allows management to assess how well your organization is positioned in the market.
  6. Reputation Management: Satisfied customers are more likely to promote your brand and refer others. Monitoring customer satisfaction helps protect and enhance your organization’s reputation.
  7. Risk Identification: Low customer satisfaction scores or negative feedback can signal potential risks to the organization’s success. Management can identify these risks and take appropriate actions to mitigate them.
  8. Communication: Including customer satisfaction data in the management review fosters open communication between different levels of the organization. It encourages a shared understanding of customer needs and expectations.
  9. Goal Alignment: Customer satisfaction metrics can be aligned with your organization’s strategic goals. Management can assess whether customer satisfaction efforts are contributing to broader business objectives.
  10. Employee Engagement: Positive customer feedback can boost employee morale and engagement by showcasing the impact of their efforts on customer experiences.
  11. Product and Service Development: Customer feedback can provide insights for developing new products or enhancing existing ones based on customer preferences and needs.
  12. Relationship Building: Monitoring customer satisfaction fosters stronger relationships with customers. Satisfied customers are more likely to become loyal, long-term clients.

When presenting customer satisfaction data in the management review, consider providing a comprehensive analysis that includes overall satisfaction scores, specific feedback from customers, trends over time, and any actions taken based on customer feedback. Visual aids such as charts, graphs, and customer testimonials can help convey the information effectively. Discuss how customer satisfaction aligns with the organization’s goals and how it informs decisions regarding process improvements and strategic directions.

Review of performance against maintenance objectives

Including a review of performance against maintenance objectives in the management review of a Quality Management System (QMS) is crucial for ensuring the effective management of your organization’s assets and facilities. Maintenance objectives play a significant role in maintaining operational efficiency, preventing downtime, and ensuring the reliability of your processes and products. Here’s why it’s important to include this review in your management review:

  1. Asset Reliability: Reviewing performance against maintenance objectives allows management to assess the reliability and availability of critical assets. This ensures that equipment and facilities are properly maintained and capable of delivering consistent performance.
  2. Operational Continuity: Effective maintenance helps prevent unexpected breakdowns and downtime, ensuring that your organization can operate smoothly and meet production or service delivery commitments.
  3. Resource Allocation: Performance against maintenance objectives provides insights into resource utilization, including labor, materials, and time. Management can make informed decisions about resource allocation and budget planning.
  4. Cost Management: Reviewing maintenance performance allows management to evaluate the cost-effectiveness of maintenance activities. It helps identify opportunities to optimize costs while ensuring asset reliability.
  5. Compliance and Regulatory Requirements: Maintenance objectives often include compliance with regulatory standards and safety requirements. A review ensures that your organization is meeting these obligations and avoiding potential legal or operational risks.
  6. Performance Metrics: Assessing maintenance objectives provides a basis for measuring key performance indicators (KPIs) related to asset reliability, maintenance efficiency, mean time between failures (MTBF), mean time to repair (MTTR), and other relevant metrics (see the MTBF/MTTR sketch at the end of this section).
  7. Root Cause Analysis: If maintenance objectives are not being met, conducting a root cause analysis can help identify underlying issues or process gaps that need to be addressed.
  8. Process Improvement: By analyzing performance against maintenance objectives, management can identify opportunities for process improvement, such as optimizing maintenance schedules, implementing predictive maintenance strategies, or enhancing maintenance training programs.
  9. Impact on Quality and Customer Satisfaction: Effective maintenance contributes to consistent product or service quality. A review ensures that maintenance practices are aligned with quality objectives, leading to improved customer satisfaction.
  10. Risk Management: Maintenance objectives help manage risks associated with equipment failures, which can lead to safety hazards, production delays, and customer dissatisfaction.
  11. Sustainability and Environmental Impact: Maintenance practices can impact energy consumption, waste generation, and environmental sustainability. Reviewing maintenance objectives allows management to assess the organization’s commitment to environmental responsibility.
  12. Continuous Improvement: Including a review of maintenance performance supports the principle of continuous improvement within the QMS. It encourages a proactive approach to identifying areas for enhancement and implementing corrective actions.

When presenting the review of performance against maintenance objectives in the management review, provide data on maintenance KPIs, relevant metrics, and trends over time. Highlight any notable achievements, challenges, or improvement initiatives related to maintenance practices. Discuss how maintenance objectives align with the organization’s broader goals and contribute to operational excellence. Visual aids, such as charts, graphs, and before-and-after comparisons, can help convey the information effectively.
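For the metrics named in item 6, a minimal sketch of the standard MTBF, MTTR, and availability calculations (the operating and repair hours are invented):

```python
def mtbf_hours(operating_hours: float, failures: int) -> float:
    """Mean time between failures."""
    return operating_hours / failures

def mttr_hours(total_repair_hours: float, failures: int) -> float:
    """Mean time to repair."""
    return total_repair_hours / failures

def availability(mtbf: float, mttr: float) -> float:
    """Inherent availability: the uptime share implied by MTBF and MTTR."""
    return mtbf / (mtbf + mttr)

mtbf = mtbf_hours(operating_hours=4_380, failures=6)   # 730 h
mttr = mttr_hours(total_repair_hours=27, failures=6)   # 4.5 h
print(f"MTBF={mtbf:.0f} h  MTTR={mttr:.1f} h  Availability={availability(mtbf, mttr):.2%}")
```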

Warranty performance

Including a review of warranty performance in the management review of a Quality Management System (QMS) is essential for evaluating the quality and reliability of your products or services from the customer’s perspective. Warranty performance provides valuable insights into how well your organization’s offerings meet customer expectations, and it plays a significant role in maintaining customer satisfaction and trust. Here’s why it’s important to include warranty performance in your management review:

  1. Customer Satisfaction: Warranty performance directly impacts customer satisfaction. A review of warranty data helps management understand whether products are meeting customer expectations, and whether any issues are being addressed promptly and effectively.
  2. Product Quality: Monitoring warranty performance allows management to assess the overall quality and reliability of products. Patterns in warranty claims can indicate potential design flaws, manufacturing defects, or other issues that need to be addressed.
  3. Defect Identification: Warranty performance data can highlight recurring defects or trends in product failures. This information is crucial for root cause analysis and for making improvements to prevent similar issues in the future.
  4. Continuous Improvement: Reviewing warranty performance contributes to a culture of continuous improvement. By analyzing warranty data, management can identify areas for enhancement in design, manufacturing, and quality control processes.
  5. Risk Management: Effective warranty management helps mitigate financial and reputation risks. By addressing warranty issues promptly, management can prevent escalation of problems and protect the organization’s brand image.
  6. Resource Allocation: Warranty performance review informs decisions about resource allocation for addressing warranty claims, customer support, repairs, replacements, and other related activities.
  7. Supplier Evaluation: Warranty data can provide insights into the performance of suppliers and components. Management can assess the impact of external factors on product quality.
  8. Root Cause Analysis: When warranty issues arise, a review can include details about root cause analysis and corrective actions taken. This demonstrates the organization’s commitment to addressing problems systematically.
  9. Decision-Making: Warranty performance data informs decisions regarding product design changes, process improvements, and customer support strategies.
  10. Product Development: Insights from warranty data can guide product development efforts by identifying areas for innovation and enhancements based on real-world usage and feedback.
  11. Communication: Including warranty performance in the management review promotes transparency and open communication within the organization. It ensures that management is aware of customer experiences and any challenges related to product quality.
  12. Legal and Regulatory Compliance: Warranty-related issues can have legal and regulatory implications. A review of warranty performance ensures that the organization is meeting its obligations in this regard.

When presenting warranty performance in the management review, include data on warranty claims, analysis of claim types, trends, and the effectiveness of corrective actions. Visual aids, such as warranty claim trend charts, comparison graphs, and summaries of significant warranty-related actions, can help convey the information effectively. Discuss how warranty performance aligns with the organization’s quality goals and how it informs decision-making and process improvements.
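A minimal sketch of one common warranty trend measure, claims per thousand units sold, computed per month (the sales and claim counts are invented):

```python
def claims_per_thousand(units_sold: int, claims: int) -> float:
    return claims / units_sold * 1_000

monthly = {"Jan": (12_400, 31), "Feb": (11_900, 26), "Mar": (13_100, 22)}
for month, (sold, claims) in monthly.items():
    print(f"{month}: {claims_per_thousand(sold, claims):.1f} claims per 1,000 units")
```

Normalizing by units sold keeps the trend meaningful when sales volume fluctuates.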

Customer scorecards

Including a review of customer scorecards in the management review of a Quality Management System (QMS) is a valuable practice for assessing your organization’s performance from the customer’s perspective. Customer scorecards provide a comprehensive and quantifiable way to evaluate how well your products, services, and processes are meeting customer expectations and requirements. Here’s why it’s important to include this review in your management review:

  1. Customer-Centric Focus: Customer scorecards emphasize the importance of meeting customer needs and preferences. Including these scorecards in the management review reinforces a customer-centric approach and helps align the organization’s efforts with customer expectations.
  2. Performance Measurement: Customer scorecards offer specific metrics and KPIs that reflect the customer’s perception of your organization’s performance. A review provides insights into how well your products and services are being received.
  3. Continuous Improvement: By analyzing customer scorecards, management can identify areas for improvement and implement strategies to enhance customer satisfaction and loyalty.
  4. Objective Feedback: Customer scorecards provide objective feedback that can guide decision-making. Management can base their actions on quantifiable data rather than assumptions.
  5. Risk Mitigation: Reviewing customer scorecards helps identify potential risks related to customer dissatisfaction, allowing management to take proactive measures to address concerns.
  6. Product and Service Development: Insights from customer scorecards can inform product and service development efforts. Management can identify areas where innovation is needed or where existing offerings can be enhanced.
  7. Competitive Analysis: Customer scorecards can include comparisons to competitors’ performance. This provides valuable insights into your organization’s competitive position and areas where you can excel.
  8. Relationship Building: By reviewing customer scorecards, management can understand the strength of customer relationships and identify opportunities to strengthen ties with key clients.
  9. Alignment with Quality Goals: Customer scorecards reflect the effectiveness of your QMS in delivering quality products and services. Management can assess whether the QMS is achieving its intended outcomes.
  10. Communication: Including customer scorecards in the management review fosters open communication and collaboration between different departments and levels of the organization.
  11. Benchmarking: Customer scorecards can be benchmarked against industry standards or best practices, providing context for your organization’s performance.
  12. Employee Engagement: Sharing positive feedback from customer scorecards can boost employee morale and engagement by showcasing the impact of their work on customer satisfaction.

When presenting customer scorecards in the management review, provide a summary of key metrics, trends over time, and any actions taken based on customer feedback. Use visual aids such as charts, graphs, and comparative analyses to convey the information effectively. Discuss how the feedback from customer scorecards aligns with the organization’s quality objectives and how it influences decisions regarding process improvements and strategic direction.

Field failures

Including the identification of potential field failures through risk analysis (such as Failure Modes and Effects Analysis or FMEA), as well as reporting actual field failures and their impact on safety or the environment, in the management review of a Quality Management System (QMS) is crucial for ensuring product safety, regulatory compliance, and environmental responsibility. Here’s why it’s important to include these aspects in your management review:

  1. Risk Management: Identifying potential field failures through risk analysis (FMEA) helps the organization proactively address and mitigate risks before they escalate into actual field failures. This proactive approach prevents safety hazards, quality issues, and environmental impacts.
  2. Prevention of Harm: By addressing potential field failures early, management can take preventive measures to avoid harm to customers, end-users, employees, and the environment.
  3. Product Quality and Safety: Field failures directly impact product quality and safety. Including these failures in the management review emphasizes the importance of maintaining high-quality standards and ensuring that products meet safety requirements.
  4. Regulatory Compliance: Actual field failures and their impact on safety or the environment are often subject to regulatory oversight. Including these failures in the management review demonstrates the organization’s commitment to meeting regulatory requirements.
  5. Customer Satisfaction: Field failures can lead to customer dissatisfaction and damage the organization’s reputation. Reviewing actual failures and their impacts reinforces the importance of meeting customer expectations.
  6. Continuous Improvement: Analyzing actual field failures and their consequences contributes to a culture of continuous improvement. Management can identify areas for enhancement, refine risk analysis processes, and implement corrective actions.
  7. Root Cause Analysis: Reporting actual field failures and their impacts includes details about root cause analysis and corrective actions taken. This shows the organization’s commitment to addressing problems systematically.
  8. Environmental Responsibility: Field failures can have environmental implications. By including their impact on the environment in the management review, management can assess the organization’s environmental responsibility and compliance.
  9. Resource Allocation: Field failures can lead to unplanned resource allocation for recalls, repairs, replacements, or customer support. Reviewing these failures allows management to make informed decisions about resource allocation.
  10. Decision-Making: Including potential and actual field failures in the management review provides decision-makers with data-driven insights. It enables management to prioritize and allocate resources for risk mitigation and corrective actions.
  11. Learning and Improvement: Field failures provide learning opportunities for the organization. Management can analyze failures to prevent their recurrence, share lessons learned, and enhance the organization’s knowledge base.
  12. Transparency and Accountability: Including field failures in the management review promotes transparency and accountability within the organization. It ensures that top management is aware of potential risks, actual failures, and the organization’s responses.

When presenting potential field failures identified through risk analysis and reporting actual field failures in the management review, provide detailed information about the failures, their impacts, root cause analysis, corrective actions, and any follow-up actions taken. Use visual aids such as tables, graphs, and diagrams to enhance clarity. Discuss how these failures align with the organization’s commitment to quality, safety, regulatory compliance, and environmental sustainability.
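A minimal sketch of how failure modes from a risk analysis might be ranked for the review. It uses the classic RPN product (severity x occurrence x detection) purely for illustration: the current AIAG & VDA FMEA handbook recommends Action Priority tables rather than a raw RPN, and every entry below is hypothetical:

```python
failure_modes = [
    # (description, severity, occurrence, detection), each on a 1-10 scale
    ("Connector corrosion -> intermittent signal", 8, 4, 6),
    ("Seal degradation -> fluid leak",             7, 3, 4),
    ("Firmware watchdog fails to trigger",         9, 2, 7),
]

# Classic RPN = S x O x D, shown here only as a simple ranking device.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN={s * o * d:3d}  S={s} O={o} D={d}  {desc}")
```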

Clause 9.3.3.1 Management review outputs

In addition to the requirements given in ISO 9001:2015 Clause 9.3.3 Management review outputs, clause 9.3.3.1 requires top management to document and implement an action plan when customer performance targets are not met.

It’s a crucial practice for top management to document and implement an action plan when customer performance targets are not met. This proactive approach helps ensure that any deviations from customer expectations are addressed promptly and effectively. Here’s why documenting and implementing an action plan in such cases is important:

  1. Accountability: Documenting an action plan holds the organization accountable for addressing issues that impact customer performance targets. It demonstrates the commitment of top management to resolving customer-related challenges.
  2. Continuous Improvement: An action plan provides a structured approach to identify root causes of underperformance and implement corrective actions. This contributes to continuous improvement and prevents recurring issues.
  3. Problem Solving: An action plan guides the organization in systematically analyzing the reasons for not meeting customer performance targets. It helps identify underlying issues and provides a framework for finding effective solutions.
  4. Resource Allocation: Documenting an action plan helps allocate necessary resources, including personnel, time, budget, and technology, to address the issues and improve customer performance.
  5. Timely Response: An action plan ensures a timely response to customer-related challenges. This helps prevent customer dissatisfaction and further escalation of issues.
  6. Prevention of Recurrence: By implementing corrective actions based on the action plan, the organization can prevent similar issues from arising in the future, enhancing long-term customer satisfaction.
  7. Communication: Documenting an action plan facilitates clear communication within the organization about the steps to be taken, responsibilities, timelines, and expected outcomes.
  8. Transparency: An action plan demonstrates transparency and commitment to improvement, both internally and externally. It shows that the organization takes customer concerns seriously.
  9. Alignment with Objectives: Implementing an action plan ensures that the organization’s actions are aligned with its objectives of meeting or exceeding customer expectations.
  10. Learning Opportunity: An action plan provides a learning opportunity for the organization. It allows the organization to learn from its mistakes and make informed decisions for future improvements.
  11. Customer Relationships: Addressing issues promptly and effectively through an action plan contributes to building and maintaining positive relationships with customers.
  12. Risk Management: Addressing issues related to customer performance targets through an action plan helps manage risks associated with customer dissatisfaction, contract breaches, and potential financial impacts.

When documenting and implementing an action plan for customer performance targets that are not met, consider the following steps:

  1. Identify the Issue: Clearly define the specific customer performance targets that were not met and the associated issues or challenges.
  2. Root Cause Analysis: Analyze the root causes of the issue to understand why the targets were not met. This may involve data analysis, process evaluation, and stakeholder input.
  3. Develop Corrective Actions: Based on the root cause analysis, develop specific corrective actions that address the identified issues and improve customer performance.
  4. Assign Responsibilities: Clearly assign responsibilities for each corrective action to individuals or teams within the organization.
  5. Set Timelines: Establish realistic timelines for the implementation of each corrective action. Ensure that deadlines are achievable and aligned with customer expectations.
  6. Allocate Resources: Determine the resources required, such as personnel, budget, technology, and training, to implement the corrective actions effectively.
  7. Monitor Progress: Regularly monitor the progress of the action plan to ensure that corrective actions are being implemented as planned.
  8. Measure Results: Assess the impact of the corrective actions on customer performance targets. Determine if the desired improvements are achieved.
  9. Documentation: Document the entire process, including the issue, root cause analysis, corrective actions, responsible parties, timelines, and outcomes.
  10. Communication: Keep stakeholders, including top management, informed about the progress of the action plan and any updates or changes.
  11. Review and Adjust: Periodically review the effectiveness of the action plan. If necessary, make adjustments based on new insights or changing circumstances.
  12. Learning and Improvement: Use the experience gained from the action plan to improve processes, enhance customer relations, and prevent similar issues in the future.

By documenting and implementing an action plan when customer performance targets are not met, top management demonstrates a commitment to customer satisfaction, quality improvement, and the success of the organization’s QMS.

IATF 16949:2016 Clause 10.3.1 Continual improvement

Continual improvement is defined as a recurring activity to increase the ability to fulfill requirements. The “ability to fulfill requirements” refers to both conforming and nonconforming processes: conforming processes can be further improved, while nonconforming processes must be improved by taking corrective action to prevent recurrence. “Recurring activity” refers to the quality improvements driven by the quality policy and objectives, audit results, analyses of data, and similar inputs. Continual improvement is only applicable to processes that are stable and capable (i.e., under control and conforming). It cannot be applied to nonconforming processes; corrective action must first be taken to bring nonconforming (unstable or non-capable) processes under control before any continual improvement can be done.

The continual improvement process can be conducted through significant breakthrough projects that either revise or improve existing processes or lead to new processes (usually done by cross-functional teams outside routine operations), and through small-step ongoing improvement activities conducted by personnel within existing processes.

Continual improvement tools include: Quality policy and quality objectives – changes in product, customer base, organization ownership, management, technology, QMS standards, etc., may require changes to your quality policy and objectives. As a tool for continual improvement, this requires top management to review and understand these changes, revise the quality policy and objectives where necessary, and use these changes to continue further improvement of the QMS and customer satisfaction. Audit results – results of product, process, and QMS audits usually provide many opportunities to improve QMS effectiveness and efficiency. Opportunities may relate to communications, information systems, processes, controls, use of resources, technology, etc. The management representative must report these opportunities to top management as part of the management review agenda; they can also be reported and reviewed at regular operational meetings. Other audits – besides product, process, and QMS audits, you might find it very productive to conduct financial, health and safety, environmental, technology, product profitability, social responsibility, and information and communication systems audits; you will be amazed at the improvement opportunities you uncover. Analyses of data – when using analyses of data as a tool for continual improvement, use the TGR (things gone right) and TGW (things gone wrong) approach to classify your data for decision-making. Examples of situations which might lead to improvement projects include: machine set-up, die change, and machine changeover times; cycle time; scrap; non-value-added use of floor space; variation in product characteristics and process parameters; less than 100% first-run capability; process averages not centered on target values; testing requirements not justified by accumulated results; waste of labor and materials; difficult manufacture, assembly, and installation of product; and excessive handling and storage.

Other tools that are often used to continually improve include: capability studies; design of experiments; evaluation procedures; quality control chart systems; risk analysis; SPC; supplier evaluation; test and measurement technology; theory of constraints; overall equipment effectiveness; parts per million (ppm) measures in pursuit of zero defects; value analysis; benchmarking; analysis of motion/ergonomics; and error-proofing.
Ensure that personnel applying these tools are competent and trained. Use SPC and new materials, tooling, equipment, or technology to control and reduce variation in product characteristics and process parameters. Document improvements in drawings, FMEAs, control plans, work instructions, etc., and update the PPAP. Performance indicators to measure the effectiveness of the continual improvement process may include: quality objectives being met sooner than planned; achieving and exceeding business and quality objectives; improved efficiency in the use of resources; cost reduction; improved product quality; and increased Cpk values.
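For overall equipment effectiveness, one of the tools listed above, a minimal sketch of the standard three-factor calculation (the shift times, cycle time, and piece counts are invented):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    return availability * performance * quality

a = 420 / 480           # run time / planned production time (minutes)
p = (0.8 * 450) / 420   # (ideal cycle time x total count) / run time
q = 435 / 450           # good count / total count
print(f"A={a:.1%}  P={p:.1%}  Q={q:.1%}  OEE={oee(a, p, q):.1%}")  # OEE ~ 72.5%
```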

Clause 10.3.1 Continual improvement

In addition to the requirement given in Clause 10.3 Continual improvement, Clause 10.3.1 requires the organization to have a documented process for continual improvement which shall include: identification of the methodology used, objectives, measurement, effectiveness, and documented information; a manufacturing process improvement action plan with emphasis on the reduction of process variation and waste; and risk analysis (such as FMEA). Continual improvement is implemented once manufacturing processes are statistically capable and stable, or when product characteristics are predictable and meet customer requirements.

“Statistically capable” refers to processes that have achieved a level of capability where the variation in their outputs is within acceptable limits; the process is predictable and can consistently produce products that meet specifications. “Stable” processes are those that exhibit consistent and predictable behavior over time, with minimal variability; stability indicates that the process is under control and not subject to significant fluctuations. In essence, the organization should focus on establishing a solid foundation of stable and capable processes, ensuring that products consistently meet customer requirements. Once this foundation is established, the organization can shift its focus to continual improvement, seeking ways to further optimize processes, enhance product quality, reduce waste, and achieve higher levels of efficiency and customer satisfaction. This approach aligns with the principles of quality management, including those outlined in IATF 16949, and emphasizes the importance of basing improvement efforts on a solid understanding of process capability, stability, and customer needs. It ensures that improvements are built upon a strong and reliable manufacturing foundation, leading to sustainable and meaningful enhancements.
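A minimal sketch of the standard Cp and Cpk calculations used to judge capability. For brevity it estimates sigma from the overall sample; formal capability studies normally use a within-subgroup estimate taken from a process already shown to be stable, and the data and specification limits below are invented:

```python
import statistics

def cp_cpk(data: list[float], lsl: float, usl: float) -> tuple[float, float]:
    """Cp = (USL - LSL) / (6 * sigma); Cpk = nearest-limit margin / (3 * sigma)."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

cp, cpk = cp_cpk([10.02, 9.99, 10.03, 10.00, 10.01, 10.04, 9.98],
                 lsl=9.90, usl=10.10)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")  # roughly Cp=1.54, Cpk=1.39
```

Cpk below Cp (here roughly 1.39 versus 1.54) signals that the process mean is off-centre within the tolerance.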

Having a documented process for continual improvement is a fundamental aspect of an effective quality management system, aligned with standards such as IATF 16949. This process provides a structured framework for identifying, prioritizing, implementing, and evaluating improvements across various aspects of the organization. Here’s how you can establish a documented process for continual improvement:

  1. Process Definition and Scope: Clearly define the scope of the continual improvement process. Determine which areas, processes, and functions within the organization will be subject to improvement efforts.
  2. Leadership Commitment: Obtain commitment and support from top management to ensure that the organization is dedicated to driving continual improvement as a core value.
  3. Cross-Functional Teams: Establish cross-functional improvement teams that include representatives from different departments. These teams will collaborate on identifying and implementing improvement opportunities.
  4. Identification of Improvement Opportunities: Develop a systematic approach for identifying improvement opportunities. This could involve analyzing customer feedback, performance metrics, audits, internal assessments, and benchmarking.
  5. Prioritization and Selection: Evaluate and prioritize the identified improvement opportunities based on factors such as potential impact, feasibility, resource availability, and alignment with strategic goals.
  6. Action Planning: Create detailed action plans for selected improvement initiatives. Specify objectives, strategies, timelines, responsibilities, and required resources for each improvement project.
  7. Implementation and Execution: Execute the action plans, making necessary changes to processes, procedures, or systems. Engage the relevant teams and stakeholders to ensure smooth implementation.
  8. Monitoring and Measurement: Establish key performance indicators (KPIs) to measure the progress and effectiveness of improvement initiatives. Regularly monitor and measure results against established targets.
  9. Review and Evaluation: Conduct periodic reviews of improvement projects to assess their outcomes, identify any deviations, and determine if the desired improvements have been achieved.
  10. Learning and Knowledge Sharing: Encourage a culture of learning and knowledge sharing within the organization. Ensure that insights gained from improvement projects are communicated across teams and departments.
  11. Documentation and Records: Document all aspects of the continual improvement process, including improvement plans, actions taken, results achieved, lessons learned, and any changes made.
  12. Feedback Mechanisms: Establish mechanisms for collecting feedback from employees, customers, and other stakeholders on improvement initiatives. Use feedback to refine processes and drive further enhancements.
  13. Training and Skill Development: Provide training to employees involved in the continual improvement process. Equip them with problem-solving skills, data analysis techniques, and tools for process enhancement.
  14. Integration with Quality Management System: Integrate the continual improvement process with the organization’s overall quality management system. Ensure alignment with other processes such as corrective action, preventive action, and risk management.
  15. Communication and Reporting: Communicate the progress and results of improvement initiatives to all relevant stakeholders. Share success stories and lessons learned to inspire and motivate the organization.

Having a documented process for continual improvement demonstrates your organization’s commitment to achieving excellence, driving innovation, and delivering value to customers. This structured approach helps foster a culture of continuous learning and enhancement, ultimately leading to sustained growth and improved competitiveness.

Methodology for Continual improvement

The process for identifying the methodology used, objectives, measurement, effectiveness, and documented information in the context of continual improvement is a structured approach to ensure that improvement initiatives are well defined, measurable, and result in meaningful enhancements:

  1. Select a Methodology: Determine the methodology or approach to be used for the specific improvement initiative. This could include established methodologies such as Six Sigma, Lean, PDCA (Plan-Do-Check-Act), DMAIC (Define-Measure-Analyze-Improve-Control), or other suitable frameworks.
  2. Define Objectives: Define clear and specific objectives for the improvement initiative. What do you aim to achieve through this improvement? Objectives should be aligned with the organization’s strategic goals and customer requirements.
  3. Establish Measurement Criteria: Establish measurement criteria or key performance indicators (KPIs) that will be used to assess the success of the initiative. These criteria should be quantifiable, measurable, and relevant to the objectives.
  4. Plan the Effectiveness Evaluation: Determine how the effectiveness of the initiative will be evaluated. This could involve assessing factors such as cost reduction, cycle time improvement, defect reduction, or customer satisfaction enhancement.
  5. Maintain Documented Information: Create and maintain documented information related to the initiative, including action plans, process maps, data analysis, reports, and any other relevant documentation.
  6. Involve Cross-Functional Teams: Involve cross-functional teams to ensure diverse perspectives and expertise, and collaborate with different departments to gather insights and input.
  7. Develop an Action Plan: Develop a detailed action plan outlining the steps, responsibilities, timelines, and resources required to implement the initiative.
  8. Execute and Monitor: Execute the action plan according to the defined methodology. Monitor progress and ensure that tasks are carried out as planned.
  9. Collect and Analyze Data: Collect relevant data and measurements according to the established measurement criteria and KPIs. Use data analysis tools to assess the current state and identify areas for improvement.
  10. Evaluate Effectiveness: Evaluate the effectiveness of the initiative based on the measurement criteria. Compare the results to the objectives to determine the level of success achieved.
  11. Document Results: Document the results of the initiative, including before-and-after data, analysis findings, lessons learned, and any challenges encountered.
  12. Share Knowledge: Encourage a culture of continuous learning by sharing insights and experiences gained from the initiative. Use this knowledge to inform future improvement projects.
  13. Review with Stakeholders: Review the documented information and results with relevant stakeholders. Gather feedback on the process, outcomes, and potential areas for further enhancement.
  14. Communicate Outcomes: Communicate the outcomes and benefits of the initiative to internal teams and, when applicable, to customers or other external parties.
  15. Integrate with the QMS: Integrate the methodology identification, objectives, measurement, and effectiveness evaluation into the broader continual improvement process of the organization.

By following this process, your organization can ensure that improvement initiatives are well defined, effectively executed, and result in measurable enhancements that contribute to the organization’s overall goals and success.
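As one illustration of how such an initiative might be recorded so that its effectiveness can be checked against its objective, a minimal sketch (the record structure, field names, and figures are all hypothetical, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class ImprovementInitiative:
    methodology: str   # e.g. "DMAIC", "PDCA", "Lean kaizen"
    objective: str
    kpi: str           # the agreed measurement criterion
    baseline: float
    target: float
    actual: float

    def effective(self) -> bool:
        # Assumes a lower-is-better KPI such as a defect rate or cycle time.
        return self.actual <= self.target

initiative = ImprovementInitiative(
    methodology="DMAIC",
    objective="Reduce final-test defect rate",
    kpi="defects per million opportunities",
    baseline=1_850, target=900, actual=760,
)
print(f"{initiative.kpi}: {initiative.baseline} -> {initiative.actual} "
      f"(target {initiative.target}, effective: {initiative.effective()})")
```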

Manufacturing process improvement action plan

Incorporating a manufacturing process improvement action plan focused on reducing process variation and waste is a proactive step towards enhancing product quality, efficiency, and overall operational excellence:

  1. Current-State Assessment: Begin by conducting a thorough assessment of the current manufacturing process. Identify areas where process variation and waste are prominent and are affecting product quality, lead times, and resource utilization.
  2. Objectives: Clearly define the objectives of the improvement action plan. Specify the desired outcomes, such as target levels of process variation and waste reduction, and how these align with overall organizational goals.
  3. Cross-Functional Team: Establish a team of experts from different departments, including manufacturing, quality, engineering, and operations, to collaborate on designing and implementing the plan.
  4. Data Collection and Analysis: Collect relevant data on process parameters, variations, defects, and waste generation. Utilize statistical tools and techniques to analyze the data and identify root causes of process variation and waste (a minimal sketch of such a check appears below).
  5. Root Cause Analysis: Perform a thorough root cause analysis to understand the underlying factors contributing to process variation and waste. Use tools such as fishbone diagrams, Pareto charts, and 5 Whys to identify key factors.
  6. Process Redesign: Based on the root cause analysis, work with the cross-functional team to redesign and optimize the manufacturing process. Implement changes that reduce sources of variation and waste and enhance process stability.
  7. Continuous Monitoring: Implement a system for continuous monitoring and measurement of process parameters, variation levels, and waste generation. Use real-time data to track progress and ensure that improvements are sustained.
  8. Standardization: Standardize the improved process by documenting standard operating procedures (SOPs) and providing training to relevant personnel. Ensure that everyone is aligned with the new process.
  9. Quality Controls: Integrate quality control measures, such as in-process inspections and quality gates, to detect and address process variations early in the production cycle.
  10. Waste Reduction: Implement strategies such as lean principles, 5S (Sort, Set in order, Shine, Standardize, Sustain), and visual management techniques to systematically eliminate waste from the process.
  11. Improvement Culture: Foster a culture of continuous improvement where employees are encouraged to identify and address process variations and waste in their daily work. Provide incentives for innovative ideas and contributions.
  12. KPIs and Reporting: Establish key performance indicators (KPIs) to measure the success of the plan. Regularly report progress and outcomes to management and relevant stakeholders.
  13. Feedback and Lessons Learned: Encourage feedback from employees involved in the process and gather their insights on further improvements. Use lessons learned to refine the process and drive further enhancements.
  14. Documentation and Communication: Document all steps of the plan, including data analysis, root cause findings, process changes, and results achieved. Communicate the plan and outcomes across the organization.
  15. Periodic Review: Periodically review the effectiveness of the plan. Make adjustments based on new insights, changing conditions, or shifts in organizational priorities.

By implementing a manufacturing process improvement action plan focused on reducing process variation and waste, your organization can achieve significant gains in product quality, cost efficiency, and customer satisfaction. This approach aligns with the principles of continual improvement and contributes to the organization’s long-term success.
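
Step 4 above leans on statistical analysis of process variation. The following is a minimal Python sketch, not a full control chart (which would estimate sigma from moving ranges or rational subgroups); the sample measurements are invented for illustration.

```python
# A simplified variation check: the mean and +/-3-sigma limits of a measured
# process parameter. A real control chart would estimate sigma from moving
# ranges or rational subgroups; the sample data here are invented.

import statistics

measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

print(f"mean = {mean:.3f}")
print(f"upper limit = {mean + 3 * sigma:.3f}")
print(f"lower limit = {mean - 3 * sigma:.3f}")
```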

Risk Analysis

Integrating risk analysis, such as Failure Mode and Effects Analysis (FMEA), into the continual improvement process enhances your organization’s ability to proactively identify and mitigate potential risks, thereby ensuring more robust and sustainable improvements:

  1. Risk Identification: As you identify areas for improvement within your processes, products, or systems, also consider the potential risks associated with them. These can include risks related to process variation, waste, quality, safety, and customer satisfaction.
  2. Cross-Functional Team: Create a team comprising experts from different disciplines to collaborate on performing the FMEA as part of the continual improvement process.
  3. Scope Definition: Define the scope of the FMEA. Identify the specific process, product, or system to be analyzed and clearly outline the boundaries and interfaces of the analysis.
  4. Failure Modes: List all possible failure modes that could occur within the scope of the analysis. These failure modes represent potential risks or issues that may impact the desired improvement.
  5. Effects Evaluation: Evaluate the potential effects or consequences of each identified failure mode. Consider how each could impact product quality, customer satisfaction, safety, and other critical factors.
  6. Severity Rating: Assign a severity rating to each failure mode based on its potential impact, using a predefined scale on which higher ratings indicate more severe consequences.
  7. Root Causes: Determine the root causes of each failure mode. Understand why these failure modes might occur and what factors contribute to their occurrence.
  8. Occurrence Rating: Assign an occurrence rating to each failure mode to represent the likelihood of its occurrence, using data, historical records, and expert judgment.
  9. Detection Rating: Assign a detection rating to each failure mode, indicating how likely the failure mode is to be detected before reaching the customer. Higher detection ratings signify a greater likelihood of the failure going undetected.
  10. Risk Priority Numbers: Calculate the Risk Priority Number (RPN) for each failure mode by multiplying the severity, occurrence, and detection ratings, and prioritize the failure modes based on their RPNs (a worked sketch appears below).
  11. Mitigation and Control: Develop mitigation and control measures for high-priority failure modes with elevated RPNs. These measures aim to reduce severity or occurrence, or to improve detection of potential issues.
  12. Implementation and Monitoring: Implement the identified mitigation measures and monitor their effectiveness over time. Adjust and refine the measures as needed based on real-world results.
  13. Documentation: Document the entire FMEA process, including failure modes, severity, occurrence, and detection ratings, RPNs, root causes, and mitigation actions taken. Communicate the outcomes and actions to relevant stakeholders.
  14. Feedback: Gather feedback from the team and stakeholders on the effectiveness of the FMEA-based improvements. Use this feedback to refine the continual improvement process and future FMEA analyses.
  15. Integration: Integrate the outcomes of the FMEA into your broader continual improvement initiatives. Use the insights gained to guide improvement efforts and ensure that risks are proactively addressed.

By incorporating risk analysis like FMEA into the continual improvement process, your organization can identify potential risks early, implement effective control measures, and drive more robust and sustainable improvements. This approach aligns with the principles of quality management and contributes to overall organizational resilience and excellence.
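
To make the RPN arithmetic in step 10 concrete, here is a minimal Python sketch of the severity x occurrence x detection calculation and the resulting prioritization. The failure modes and ratings are illustrative assumptions, not data from any real analysis.

```python
# Minimal RPN worksheet: RPN = severity x occurrence x detection, each rated
# on a 1-10 scale. The failure modes and ratings are illustrative assumptions.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

failure_modes = [
    # (description, S, O, D)
    ("Connector corrosion", 7, 4, 3),
    ("Seal extrusion under heat", 8, 2, 6),
    ("Firmware watchdog lockup", 9, 3, 7),
]

# Rank from highest to lowest RPN to prioritize mitigation effort.
for desc, s, o, d in sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True):
    print(f"{desc}: S={s} O={o} D={d} RPN={rpn(s, o, d)}")
```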

IATF 16949:2016 clause 10.2.6 Customer complaints and field failure test analysis

Customer complaints

In the context of IATF 16949, the automotive industry’s quality management standard, the handling of customer complaints is a critical component of the quality management system. Effective management of customer complaints helps organizations address issues, improve products and processes, and enhance customer satisfaction. Here’s how customer complaints are typically managed in compliance with IATF 16949:

  1. Complaint Intake: Establish a formal process for receiving and registering customer complaints. Designate responsible personnel to handle complaint intake and ensure that all relevant information is accurately documented.
  2. Classification: Classify and categorize customer complaints based on factors such as the nature of the issue, the product or service involved, severity, and potential impact on safety and quality.
  3. Investigation: Initiate a comprehensive investigation into the root cause of the complaint, using methodologies such as the 8D (Eight Disciplines) problem-solving process or other recognized problem-solving approaches.
  4. Cross-Functional Involvement: Involve cross-functional teams, including engineering, quality, manufacturing, and relevant stakeholders, to collaboratively analyze the issue and identify its underlying causes.
  5. Immediate Corrective Action: Implement immediate corrective actions to address the identified root cause and prevent recurrence of the issue. Take steps to contain and mitigate the impact of the problem.
  6. Customer Communication: Maintain open and transparent communication with the customer throughout the investigation and resolution process. Provide timely updates on the progress of corrective actions.
  7. Design and Process Feedback: If the complaint reveals a design flaw or manufacturing process issue, provide feedback to the design and production teams to drive improvements.
  8. Long-Term Corrective Action: Develop and implement long-term corrective actions to address systemic issues and prevent similar complaints from occurring in the future.
  9. Verification: Monitor the effectiveness of implemented corrective actions through ongoing verification and validation. Ensure that the issue has been fully resolved and no further recurrence is observed.
  10. Documentation: Maintain detailed documentation of the entire complaint handling process, including complaint details, investigation findings, corrective actions, and verification results.
  11. Trend Analysis: Analyze customer complaint data to identify trends, patterns, and common issues. Use this analysis to drive continuous improvement efforts.
  12. Feedback Capture: Implement mechanisms to capture customer feedback, whether positive or negative, and integrate this feedback into product development and improvement processes.
  13. Training and Awareness: Provide training to employees involved in handling customer complaints to ensure they are equipped with the necessary skills and knowledge.
  14. Internal Audit: Include customer complaint management as part of your organization’s internal audit process to verify compliance with IATF 16949 requirements.

Effectively managing customer complaints in accordance with IATF 16949 not only ensures compliance with quality standards but also contributes to improved product quality, customer satisfaction, and overall business success in the automotive industry.

Field Failure Analysis

Field failure analysis involves collaboration between customers and suppliers to analyze returned failed components, particularly those with no-fault found reports. It is suitable for the entire supply chain, including original equipment manufacturers (OEM) and suppliers. It provides discrete steps, defined procedures, and a clear allocation of responsibilities.

When a component fails ‘in the field,’ the defective part is replaced, and the manufacturer (OEM) or supplier may request its return for analysis. Often, no fault is found during that analysis. In the past, the supplier or OEM took no further action in such cases, but that practice had to change. The objective of the FFA is to establish the reason for removing a field-failed part from a vehicle, identify the root cause of the failure, and implement corrective actions. A subsequent process addresses the no-fault-found parts once they reach an agreed threshold.

The steps of the FFA process

Field failure analysis follows an escalating test philosophy to ensure a rigorous analysis, but one that can be economically justified. 

Phase #1: Part analysis

The FFA process begins upon receipt of a defective part from the field and involves a part analysis. The part analysis takes place in three discrete steps: standard tests, complaint evaluation, and tests under load.

1a: Standard tests

The standard test begins with a comprehensive visual inspection before testing the component at ambient temperatures and in an environment that mirrors in-service conditions, including loads. All tests must follow accepted test methods and use unambiguous test criteria, with all normal tests fully completed. Identification of a fault is not a reason to prematurely cease testing. The person carrying out the tests should carefully document the process, findings, and observations. Standard and load tests should not damage the inspected part so it can be re-installed into the machine for further testing. This is why destructive testing is avoided (unless the supplier and the customer specifically agree to it).

1b: Failure complaint evaluation

An explicit fault description must accompany any part received from the field. After completing the standard test, the testing team will evaluate the complaint or fault description, and any faults found are checked for plausibility against the customer’s complaint. If the fault is proven, the FFA process proceeds to a problem-solving process. If no faults are detected or the failure found does not align with the original complaint, the testing team will plan additional specific load tests to elicit the original failure. 

1c: Load tests

Proceeding to the load-testing phase assumes the standard tests found no fault, or that the identified faults do not accord with the customer complaint. The test plan must be agreed on, based on the component design parameters and requirements specifications. It must ensure the testing is equivalent to the environmental and in-operation conditions, including humidity, speeds, voltages, and physical loads. Extra load parameters should form part of the test plan to evoke the defects highlighted in the complaint.

If a fault is proven, the FFA process proceeds to a product-handling procedure, with the component tagged as ‘not in order,’ or N.I.O. Tagging a part as N.I.O. does not mean the customer complaint is proven, simply that testing found a fault. Parts with no proven faults are tagged as ‘in order,’ or I.O., based on the part analysis. The I.O. tag does not mean the part is serviceable, simply that testing failed to elicit a fault. If the triggering criteria are met for parts tagged as I.O., the defective part proceeds to the NTF, or ‘no trouble found,’ phase; if not, the part passes to a product-handling procedure.

Completing the load tests is an inflection point: it determines whether the component proceeds to a product-handling procedure as an individual failure or triggers the NTF process to investigate possible systemic or process defects, based on the number or importance of the failures. The NTF trigger is a metric agreed between the parties involved, usually the supplier and OEM. It may be an agreed threshold of parts reported as ‘in order’ following part analysis, the number of complaints received from a customer, or faults arising from a new component or product launch.
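
Because the NTF trigger is an agreed metric rather than a fixed rule, any implementation is specific to the supplier-OEM agreement. The Python sketch below shows one plausible convention, in which the process is triggered when the count of parts tagged I.O. reaches an assumed threshold; the threshold value and tag strings are illustrative assumptions.

```python
# Hypothetical NTF trigger check: parts tagged 'I.O.' (in order) after part
# analysis are counted against an agreed threshold. The threshold value and
# tag strings are illustrative assumptions, not values from any standard.

from collections import Counter

AGREED_NTF_THRESHOLD = 10  # assumed supplier-OEM agreement

def ntf_triggered(part_tags: list[str]) -> bool:
    """Return True when the count of I.O. parts reaches the agreed threshold."""
    return Counter(part_tags)["I.O."] >= AGREED_NTF_THRESHOLD

# Example: 12 returned parts, 10 with no fault found during part analysis.
tags = ["I.O."] * 10 + ["N.I.O."] * 2
print(ntf_triggered(tags))  # True -> start the NTF process
```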

Phase #2: The NTF process

The NTF process comprises three distinct but highly iterative stages, so it would be incorrect to consider them a linear progression. Instead, think of them as three corners of a triangle, in the center of which lies the answer.

2a: Data collection and evaluation

The data collection and evaluation stage involves each party in the NTF process carrying out data collection and evaluation related to its area of responsibility. This stage is wide-ranging and may include failure databases, the geographic specificity of failures, service and repair data, or production process information. Evaluation techniques can include statistical analysis, equipment history, correlations between failure rates and production changes, or the distribution of failures by mileage.

2b: System tests

The system tests are a more comprehensive and wider-ranging version of the load tests. They may include involvement from outside testing laboratories, aging tests, functional tests under varying loads, or tests in the vehicle producing the problems. Rather than focus solely on the component, the system tests investigate relationships and active connections within the system.

2c: Process study

The process study investigates interface issues between organizations and systemic problems in programming, diagnosis, or equipment and parts manuals. It may also investigate possible influences from peripheral components that might impact the failure, such as seals, hoses, clamps, or electrical connectors. If the NTF process has failed to identify an issue, all parties must agree whether to continue with further analysis or document and conclude the process. If the NTF process identifies the problem, the FFA proceeds to the problem-solving stage.

Phase #3: Problem solving

The automotive industry uses the 8D method for sustainable problem solving. However, for this stage an organization may use any root cause analysis process it is familiar with. The 8D method uses the following steps:

  1. Assemble the team
  2. Describe the problem
  3. Contain the problem – isolate from client impact
  4. Carry out a root cause analysis
  5. Plan permanent corrective actions
  6. Implement and validate permanent corrective actions
  7. Prevent recurrence
  8. Recognize team contributions

The OEM will usually initiate the FFA process, as they receive the warranty returns or defective parts. However, the supplier should begin their part of the process the moment they receive defective parts into their stores system. Under the FFA procedure, a supplier has strict responsibilities to quarantine, document, track, and preserve the defective parts to ensure a rigorous chain of custody.

The FFA procedure is collaborative and depends on the context of the failure, with all parties agreeing on who will lead the process and who will take part, as well as the responsibilities and deliverables of each party. Much of the process requires the involvement of all parties, with one nominated to take the lead. The data management and collection process underpins the integrity of the FFA process and revolves around two principles:

  1. The first is a rigorous and documented chain of custody and evidence preservation management system that starts upon receipt of the part from the field and does not end until its disposal. 
  2. The second principle is that once a part failure is found, at any point in the FFA process, it is declared defective regardless of whether the failure can or cannot be reproduced.

To ensure a meaningful analysis, upon receipt of a defective part the supplier or OEM must implement a system of traceability that follows the part through the FFA and can be examined at any point to establish the component’s test status. The item must be reliably marked and retained in a quarantine area to prevent it from being returned to service, sold, or destroyed. To avoid compromising the investigation, the supplier or OEM must take care not to clean, modify, or damage the component, unless such cleaning or modification forms part of the formal FFA procedure agreed upon by the parties.

The field failure analysis process outlined here provides a useful template for other industries and manufacturers who wish to stop spending money on warranty returns and product recalls. Implementing such a strictly controlled and documented process removes subjectivity and external influence from product investigations and testing. It provides a scientific and systematic approach for applying corrective actions and building quality into existing products.

10.2.6 Customer complaints and field failure test analysis

The organization shall perform analysis on customer complaints and field failures, including any returned parts, and shall initiate problem solving and corrective action to prevent recurrence. Where requested by the customer, this shall include analysis of the interaction of embedded software of the organization’s product within the system of the final customer’s product. The organization shall communicate the results of testing/analysis to the customer and also within the organization.

Customer Complaint

Performing analysis on customer complaints and initiating problem-solving and corrective action is a fundamental practice in quality management, especially in compliance with standards like IATF 16949. This approach helps organizations identify the root causes of issues, implement effective solutions, and prevent the recurrence of similar problems. Here’s how the process can be structured:

  1. Complaint Analysis: Collect and gather detailed information about the customer complaint. This includes the nature of the issue, product details, circumstances, and any supporting evidence provided by the customer.
  2. Problem Identification: Analyze the complaint data to identify the specific problem or nonconformity that led to the customer’s concern. Determine whether the issue is related to design, production, service, or other aspects.
  3. Root Cause Analysis: Perform a thorough root cause analysis using appropriate methodologies such as the 5 Whys, Fishbone diagrams (Ishikawa), Fault Tree Analysis, or Failure Mode and Effects Analysis (FMEA). Identify the underlying factors contributing to the issue.
  4. Cross-Functional Collaboration: Involve relevant teams and departments, such as engineering, manufacturing, quality, and customer service, in the analysis process. Collaborative efforts enhance the accuracy of root cause identification.
  5. Immediate Corrective Action: Implement immediate corrective actions to address the identified root cause and prevent the issue from affecting other products or processes. This may involve containment measures to prevent further occurrences.
  6. Long-Term Corrective Action: Develop and implement long-term corrective actions that address systemic issues and prevent the recurrence of similar problems. Focus on process improvements, design enhancements, or training.
  7. Validation and Testing: Verify the effectiveness of the corrective actions through testing or validation. Ensure that the implemented solutions successfully address the root cause and produce the desired results.
  8. Documentation and Reporting: Document the entire analysis process, root cause findings, corrective actions taken, and their outcomes. Maintain accurate records for future reference, audits, and continuous improvement.
  9. Communication with Customer: Maintain transparent and consistent communication with the customer. Provide updates on the analysis progress, actions taken, and expected resolutions. Seek their feedback and input.
  10. Feedback Loop: Integrate the lessons learned from the analysis and corrective action into your organization’s continuous improvement efforts. Apply insights to enhance product design, manufacturing processes, and overall quality.
  11. Training and Skill Development: Ensure that employees involved in the analysis and corrective action process have the necessary skills and training to perform effective root cause analysis and problem-solving.
  12. Audit and Verification: Include the complaint analysis and corrective action process in your internal audit program to verify compliance with quality standards and the organization’s quality management system.

By systematically performing analysis on customer complaints, identifying root causes, and implementing appropriate corrective actions, the organization not only resolves immediate concerns but also strengthens its overall quality management system. This approach aligns with the principles of IATF 16949 and contributes to delivering high-quality products and services that meet or exceed customer expectations.

Field failure

Performing analysis on field failure test results and initiating problem-solving and corrective action is a crucial aspect of quality management and continuous improvement, particularly in industries like automotive where product reliability and safety are paramount. Here’s how the process can be structured:

  1. Field Failure Test Analysis: Gather data and information from field failure tests, which involve real-world conditions and usage scenarios. This data can include failure rates, patterns, and other relevant information.
  2. Data Collection and Segmentation: Collect detailed data on field failures, including product details, failure modes, locations, and environmental conditions. Segment the data based on factors such as product models, batches, or geographical regions.
  3. Root Cause Analysis: Analyze the field failure data to identify root causes of the failures. Use techniques like the 5 Whys, Fishbone diagrams, or Failure Mode and Effects Analysis (FMEA) to determine underlying factors.
  4. Cross-Functional Collaboration: Collaborate across departments, including engineering, manufacturing, quality, and testing, to collectively analyze the field failure data. Different perspectives can lead to more accurate root cause identification.
  5. Immediate Corrective Action: Implement immediate corrective actions to address the root causes of identified failures. Focus on containing and preventing further occurrences of the same issues.
  6. Long-Term Corrective Action: Develop and implement long-term corrective actions that address systemic issues to prevent recurrence. This may involve design changes, process improvements, or material upgrades.
  7. Validation and Testing: Verify the effectiveness of corrective actions through additional testing or validation. Ensure that the implemented solutions successfully resolve the identified root causes.
  8. Documentation and Reporting: Document the entire analysis process, root cause findings, corrective actions taken, and validation results. Maintain accurate records for future reference and audits.
  9. Communication and Reporting: Communicate the analysis findings, corrective actions, and outcomes to relevant stakeholders, including management, design teams, and customers if necessary.
  10. Feedback Loop and Continuous Improvement: Incorporate the lessons learned from the field failure test analysis into your organization’s continuous improvement efforts. Use insights to drive enhancements in product design, manufacturing, and quality control.
  11. Training and Skill Development: Provide training to employees involved in the analysis and corrective action process to ensure they have the necessary skills for effective root cause analysis and problem-solving.
  12. Audit and Verification: Include the field failure test analysis and corrective action process in your internal audit program to verify compliance with quality standards and the organization’s quality management system.

By systematically analyzing field failure test results, identifying root causes, and implementing appropriate corrective actions, the organization can enhance product reliability, safety, and customer satisfaction. This approach aligns with the principles of quality management and contributes to delivering products that meet or exceed customer expectations in industries like automotive.

Embedded Software

When requested by the customer, conducting an analysis of the interaction of embedded software of the organization’s product within the system of the final customer’s product is an important step to ensure that the software functions correctly and seamlessly within the larger context. This is particularly relevant in industries where embedded software plays a critical role in the operation, functionality, and safety of products. Here’s how you can approach this requirement:

  1. Customer Requirement Clarification: Clearly understand and document the specific customer request for analyzing the interaction of embedded software within the final customer’s product system. Seek clarification if needed to ensure alignment.
  2. Identify Scope and Boundaries: Define the scope of the analysis, including the embedded software components to be considered, the final customer’s product system, and any specific interfaces or interactions to be studied.
  3. Collaboration with Customer: Establish open communication and collaboration with the customer to gather relevant information about the final product system, its requirements, interfaces, and expected behavior.
  4. Embedded Software Assessment: Evaluate the embedded software components in terms of their compatibility, functionality, and performance within the larger system. Identify any potential points of interaction, integration challenges, or areas of concern.
  5. Interoperability Testing: Perform interoperability testing to verify that the embedded software interacts as intended with other components of the final customer’s product system. This may involve functional, interface, and performance testing.
  6. Risk Assessment: Assess the potential risks associated with the embedded software’s interaction within the larger system. Identify any potential vulnerabilities, compatibility issues, or risks that may arise.
  7. Root Cause Analysis: If any issues or discrepancies are identified, conduct root cause analysis to determine the underlying causes of the problems. This may involve analyzing software code, system configurations, or communication protocols.
  8. Corrective and Preventive Actions: Develop and implement corrective actions to address any identified issues or risks. Focus on ensuring that the embedded software performs optimally within the final product system.
  9. Validation and Verification: Validate the effectiveness of corrective actions through testing and verification. Ensure that the embedded software’s interaction with the larger system has been improved and optimized.
  10. Documentation and Reporting: Document the entire analysis process, including findings, actions taken, testing results, and validation outcomes. Maintain detailed records for future reference and audits.
  11. Customer Communication: Provide the customer with comprehensive reports and updates on the analysis process, outcomes, and any actions taken to enhance the embedded software’s performance within their product system.
  12. Continuous Improvement: Incorporate the insights gained from the analysis into your organization’s continuous improvement efforts. Apply lessons learned to enhance software design, development processes, and system integration.
  13. Training and Expertise: Ensure that the team members responsible for conducting the analysis have the necessary expertise in software engineering, system integration, and quality management.

By conducting a thorough analysis of the interaction of embedded software within the final customer’s product system, the organization demonstrates its commitment to meeting customer requirements and ensuring the reliable and effective operation of its products. This approach aligns with the principles of quality management and customer satisfaction, contributing to the organization’s success in delivering high-quality products with embedded software components.

Communicate the results of testing/analysis

Communicating the results of testing and analysis to both the customer and within the organization is a crucial aspect of quality management and transparency. Effective communication ensures that all relevant stakeholders are informed about the outcomes, allowing for informed decision-making, continuous improvement, and alignment with customer expectations. Here’s how you can approach this communication process:

  1. Customer Communication: Prepare a comprehensive report detailing the results of the testing or analysis. The report should include clear and concise information about the purpose of the analysis, methodologies used, findings, any identified issues, corrective actions taken, and validation outcomes. Tailor the communication to the customer’s needs and preferences. Provide the information in a format that is easily understandable and relevant to the customer’s perspective.
  2. Timely Updates: Communicate the results in a timely manner to ensure that the customer is promptly informed about the outcomes. Timeliness is especially important if the analysis reveals any critical issues that may impact the customer’s operations or decisions.
  3. Transparency and Accuracy: Be transparent and honest in presenting the results, both positive and negative. Accurate reporting builds trust and credibility with the customer.
  4. Clarification and Q&A: Be available to address any questions or concerns the customer may have about the testing/analysis results. Provide clarification as needed to ensure that the customer fully understands the findings and implications.
  5. Documentation Sharing: Provide the customer with copies of the detailed analysis report, as well as any relevant documentation such as test protocols, validation results, and corrective action plans.
  6. Internal Communication: Within the organization, disseminate the results of testing/analysis to relevant departments and teams. This ensures that all stakeholders are aware of the outcomes and can contribute to follow-up actions and improvements.
  7. Cross-Functional Collaboration: Engage cross-functional teams within the organization to share the results. This may involve quality, engineering, manufacturing, design, and other relevant departments.
  8. Learning and Improvement: Encourage internal discussions and meetings to review the results and identify opportunities for improvement. Use the insights gained from the analysis to drive continuous improvement efforts.
  9. Action Plans: Develop action plans based on the analysis results, whether it involves implementing corrective actions, making design improvements, or enhancing testing methodologies.
  10. Training and Awareness: Provide training and awareness sessions within the organization to ensure that employees understand the significance of the analysis results and their role in implementing follow-up actions.
  11. Documentation and Records: Maintain accurate and well-organized records of the testing/analysis results, communication with the customer, and internal discussions. Proper documentation supports accountability and auditability.
  12. Feedback Loop: Establish a feedback loop with the customer to ensure that they are satisfied with the communication and that the results align with their expectations.

By effectively communicating the results of testing and analysis both to the customer and within the organization, you demonstrate a commitment to transparency, accountability, and continuous improvement. This approach aligns with the principles of quality management and contributes to building strong relationships with customers and internal stakeholders.

IATF 16949:2016 Clause 10.2.5 Warranty management systems

A warranty is a statement of assurance or undertaking issued by the manufacturer of a product concerning the performance of the product and the parts supplied by way of a sale transaction to the customer, for a certain period as stated in the warranty card accompanying the product. In other words, it is a performance guarantee for the product given by the manufacturer. Any poor performance caused by the nonperformance of, or a defect in, any part of the product will be made good by the supplier/manufacturer, either by replacing the part or product or by repairing the product.

Warranty management is today a function of the Service Parts Management stream in the organization. Service Parts Management teams are the owners of service support delivery and function as the primary contact points with the customer. At the first level, service support teams comprise a Customer Desk, which is the first point of contact for customers to register a service request, and technicians and engineers who provide front-end site support as the second point of contact. Parts Support Managers oversee the operations and take responsibility for closing calls and for delivery performance.

Warranty management and claims processing systems are driven by Internet- and e-commerce-enabled applications that generally consist of the following modules:

  • Service Warranty Database and Tracker (Database information uploaded from Sales Module)
  • Service Request Registration, authorization, service job ticket issue, job ticket closure & Report functions.
  • Parts procurement request, parts issue authorization
  • Parts Inventory Management, Purchase Order Management, Repair Management, Vendor Management etc.

Parts procurement and logistics may be handled by a single department or by separate teams, depending upon the volume of business and the management structure. These functions manage parts procurement, inbound logistics, parts warehousing, and distribution on the outbound cycle. Reverse logistics functions managed by the team include parts collection, parts segregation, inventory holding of defective parts, parts repair, warranty replacement with the OE manufacturer, re-export, and waste disposal or scrapping.

IPTV (Incidents Per Thousand Vehicles)
Also known as C1000, this indicator measures the number of problems reported by final customers visiting dealer stations. A reported incident does not automatically mean a component was replaced; the dealer may, for example, only update software, lubricate interface elements, or retighten them. The starting point for defining this indicator is joint work between the customer’s engineering function and the organization on the reliability plan, which corresponds to APQP element 1.4, Product and Process Assumptions.
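
As a worked example of the arithmetic (the figures below are invented), IPTV is simply the incident count normalized per 1,000 vehicles in service:

```python
# IPTV: dealer-reported incidents normalized per 1,000 vehicles in service.
# The figures are invented for illustration.

def iptv(incidents: int, vehicles_in_service: int) -> float:
    return incidents / vehicles_in_service * 1000

# Example: 84 incidents across 120,000 vehicles in service.
print(iptv(84, 120_000))  # 0.7
```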

TF (Technical Factor – %)
This defines the percentage share of the organization’s financial responsibility for parts replaced by the dealer within the warranty period. From the supplier’s point of view, this is a key indicator that translates directly into poor-quality costs. Defining its value should start immediately after SOP (start of production) with the analysis of the first parts. The first meetings with customers to define the Technical Factor usually take place a few months after the project is launched, by which time several dozen parts have already been analyzed (more is, of course, advantageous for the organization). The final evaluation takes into account:

  • the number of parts analyzed in a given period
  • the number of parts for which a defect can be attributed to supplier responsibility
  • the number of parts for which the defect reported by the final customer could not be confirmed

It is also worth remembering that TF is not an indicator that is defined once. Depending on the quality performance achieved, its value may be reduced (where the organization implements actions for the identified defects) or increased. The latter case is usually tied to a chronic problem in the process, subcomponents, or design; such issues should be analyzed with great attention as part of the organization’s automotive warranty management activity.
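
The exact TF formula is negotiated between supplier and OEM, so the Python sketch below shows only one plausible convention: the share of analyzed parts with a confirmed supplier-attributable defect. Any cost sharing for NTF parts would be layered on top by agreement; the sample figures are invented.

```python
# One plausible Technical Factor convention: the percentage of analyzed
# warranty parts with a confirmed supplier-attributable defect. The real
# formula and any NTF cost sharing are negotiated per contract; the sample
# figures are invented.

def technical_factor(parts_analyzed: int, supplier_defects: int) -> float:
    return supplier_defects / parts_analyzed * 100

# Example period: 60 parts analyzed, 9 confirmed supplier defects,
# 40 with no confirmed defect (NTF, handled separately by agreement).
print(f"TF = {technical_factor(60, 9):.1f}%")  # TF = 15.0%
```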

NTF (No Trouble Found)
No confirmation of the defect: the term is used when analyzing warranty returns for which, after performing standard tests, the defects indicated by the end user were not found. Depending on the customer (OEM), additional requirements related to NTF may apply.

Months in Service (MIS)
Defined as the period during which the vehicle has been in use by the end customer. The usual assumption is that 30 days of use equals one index unit (30 days = 1.0 MIS). When working with Ford, note that the same concept is referred to as TIS (Time-In-Service). The most commonly used periods are 3 months (3 MIS), 12 months (12 MIS), and 24 months (24 MIS), counted from the date of vehicle purchase by the end customer to the date the issue is reported to the dealer. Nevertheless, customers such as VW, GM, and Ford make it possible to view performance even for a single month.
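
The conversion itself is simple arithmetic, as the short sketch below shows (the 90-day example is invented):

```python
# MIS conversion: 30 days of end-customer use = 1.0 MIS (Ford's TIS is the
# same idea). The 90-day example is invented.

def months_in_service(days_in_service: int) -> float:
    return days_in_service / 30

print(months_in_service(90))  # 3.0
```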

In summary, each person in the organization who is responsible for managing warranty returns should be familiar with the terms above in order to correctly understand the links between them and to identify their impact on potential financial charges.

10.2.5 Warranty management systems

When the organization is required to provide a warranty for its product, it must implement a warranty management process and include in that process a method for warranty part analysis, including NTF (no trouble found). When specified by the customer, the organization shall implement the customer-required warranty management process.

An automotive warranty management system is a structured and systematic approach used by automotive manufacturers, suppliers, and service providers to manage warranty-related processes, claims, data, and customer interactions. This system helps ensure that products meet quality standards, provides effective customer support, and addresses warranty claims efficiently. Here are the key components and functions of an automotive warranty management system:

  1. Warranty Policy and Guidelines: Define the warranty terms, conditions, and coverage for automotive products. Establish guidelines for handling warranty claims, including what is covered, claim submission deadlines, and customer responsibilities.
  2. Claims Management: Receive, process, and manage warranty claims from customers, dealerships, or other stakeholders. Verify claim validity, assess the nature of defects, and determine whether the claim is eligible for reimbursement or repair.
  3. Data Collection and Analysis: Collect and analyze warranty data to identify trends, patterns, and recurring issues. Use data insights to improve product design, manufacturing processes, and overall quality.
  4. Supplier Collaboration: Collaborate with suppliers to track and address issues related to defective components or materials. Work together to identify root causes and implement corrective actions.
  5. Customer Support: Provide efficient and responsive customer support for warranty-related inquiries, claims submission, and dispute resolution. Keep customers informed about the status of their claims.
  6. Documentation and Records: Maintain detailed records of warranty claims, including customer information, product details, defect descriptions, repair actions, and claim outcomes.
  7. Repair and Replacement: Coordinate and manage repair or replacement activities for defective products. Ensure that repairs are performed correctly and that replacement parts meet quality standards.
  8. Communication Channels: Establish clear communication channels for customers, dealerships, and other stakeholders to submit warranty claims, seek assistance, and provide feedback.
  9. Escalation and Approval: Define escalation procedures for handling complex or high-value warranty claims. Ensure proper authorization and approval processes for claims resolution.
  10. Reporting and Analytics: Generate reports and dashboards that provide insights into warranty trends, costs, claim processing times, and other key performance indicators.
  11. Continuous Improvement: Use warranty data and analysis to drive continuous improvement in product design, manufacturing processes, and quality control.
  12. Regulatory Compliance: Ensure compliance with relevant automotive industry regulations, standards, and laws related to warranty coverage, consumer protection, and data privacy.
  13. Integration with Quality Management: Integrate the warranty management system with quality management processes, such as root cause analysis, corrective and preventive actions, and process improvement initiatives.

An effective automotive warranty management system helps organizations enhance customer satisfaction, reduce warranty costs, identify quality improvement opportunities, and strengthen their reputation within the automotive industry. It aligns with the principles of quality management, customer focus, and continuous improvement, supporting the organization’s commitment to delivering reliable and high-quality automotive products and services.

Warranty Part Analysis

Including warranty part analysis in the automotive warranty management process is essential for identifying root causes of product failures, improving product quality, and effectively managing warranty claims. This analysis involves a systematic examination of warranty-related data, including information about defective parts, to gain insights into the underlying issues that lead to warranty claims. Here’s how you can integrate warranty part analysis into your warranty management process:

  1. Data Collection and Aggregation: Gather comprehensive data on warranty claims, including information about the specific parts or components that have experienced failures. Collect data related to part numbers, quantities, failure descriptions, dates of failure, and customer feedback.
  2. Categorization and Segmentation: Categorize warranty claims based on the type of parts or components that are failing. Segregate claims by part number, product model, production batch, or any other relevant criteria.
  3. Root Cause Analysis: Perform thorough root cause analysis on each category of warranty parts. Use techniques such as the 5 Whys, Fishbone diagrams (Ishikawa), Failure Mode and Effects Analysis (FMEA), and statistical analysis to identify underlying causes of failures.
  4. Trend Identification: Analyze warranty part data to identify trends and patterns across different parts or components. Look for common failure modes, recurrence rates, and potential systemic issues (a minimal counting sketch follows this list).
  5. Supplier Involvement: Collaborate with suppliers to investigate and understand potential defects in the supplied parts. Share warranty data and analysis to jointly identify root causes and implement corrective actions.
  6. Quality Improvement: Based on the analysis, develop and implement corrective and preventive actions to address the identified root causes. Improve part design, manufacturing processes, quality control, and materials to prevent future failures.
  7. Feedback Loop: Establish a feedback loop between warranty part analysis and the product development process. Share insights from warranty data with design and engineering teams to inform design improvements and modifications.
  8. Documentation and Reporting: Document the findings of warranty part analysis, including root causes, corrective actions, and outcomes. Generate reports and dashboards that communicate trends and improvement initiatives.
  9. Continuous Monitoring: Continuously monitor the effectiveness of implemented corrective actions. Track changes in warranty part performance over time to ensure sustained improvement.
  10. Knowledge Sharing: Share insights and lessons learned from warranty part analysis across the organization. Disseminate information to relevant departments, including manufacturing, engineering, and quality assurance.
  11. Training and Skill Development: Provide training to employees involved in warranty part analysis to ensure they have the necessary skills and knowledge to perform effective root cause analysis and data interpretation.
  12. Alignment with Quality Objectives: Ensure that the outcomes of warranty part analysis align with the organization’s quality objectives and contribute to the overall improvement of product quality and customer satisfaction.
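
As a minimal illustration of the trend identification described in step 4, the following Python sketch counts warranty claims per failure mode, Pareto-style, so the dominant modes stand out. The claim records and field names are illustrative assumptions.

```python
# Pareto-style count of warranty claims per failure mode, so the "vital few"
# modes that dominate claims stand out. The records and field names are
# illustrative assumptions.

from collections import Counter

claims = [
    {"part_no": "A123", "failure_mode": "connector corrosion"},
    {"part_no": "A123", "failure_mode": "connector corrosion"},
    {"part_no": "B456", "failure_mode": "seal leak"},
    {"part_no": "A123", "failure_mode": "housing crack"},
]

for mode, count in Counter(c["failure_mode"] for c in claims).most_common():
    print(f"{mode}: {count}")
```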

By integrating warranty part analysis into the warranty management process, the organization can proactively address product failures, reduce warranty costs, enhance customer satisfaction, and drive continuous improvement in product design and manufacturing processes. This approach supports the organization’s commitment to delivering high-quality automotive products and services and aligns with the principles of IATF 16949 and other quality standards.

NTF (No Trouble Found)

Incorporating NTF (No Trouble Found) cases into the automotive warranty management process is important to effectively handle warranty claims where no identifiable defects or issues are found with the claimed product. Handling NTF cases systematically helps maintain customer satisfaction, reduce unnecessary costs, and improve the overall warranty management process. Here’s how you can include NTF cases in your warranty management process:

  1. Clearly identify and segregate warranty claims that fall under the category of NTF. These are cases where no actual defects or issues are identified upon investigation.
  2. Thoroughly document the customer’s complaint and description of the issue. Capture all relevant details to ensure a complete understanding of the reported problem.
  3. Conduct an initial assessment of the claimed product to verify the reported issue. This may involve basic inspections, tests, or diagnostics to determine if any visible or obvious problems exist.
  4. Maintain open and transparent communication with the customer throughout the process. Inform them of the assessment results and the steps being taken to investigate further.
  5. If no immediate issues are found, conduct a more detailed analysis involving specialized testing, diagnostics, or collaboration with relevant technical teams.
  6. Apply root cause analysis methodologies to explore potential underlying causes of the reported issue, even if they are not immediately apparent.
  7. If applicable, involve suppliers in the investigation to ensure that the reported issue is not related to components or parts they have supplied.
  8. Perform validation testing under various conditions to replicate the reported problem and determine if it can be reproduced.
  9. If the claimed product was initially assessed by a dealership or service center, collaborate with the Dealerships to gather additional insights and observations.
  10. Provide feedback and training to service technicians, dealerships, and personnel involved in diagnosing and handling NTF cases. Enhance their ability to accurately assess and address such situations.
  11. Maintain detailed records of NTF cases, including the steps taken, analysis results, customer interactions, and any follow-up actions.
  12. Educate customers about the possibility of NTF cases, explaining that complex systems may exhibit intermittent or difficult-to-replicate issues.
  13. In cases where no defects are found, consider offering compensation, discounts, or additional support to the customer to maintain a positive relationship.
  14. Analyze NTF cases as part of your overall warranty data analysis to identify trends, improve diagnostic processes, and enhance overall product quality.

By including NTF cases in your warranty management process, you demonstrate a customer-focused approach to addressing reported issues, even when no immediate defects are found. This approach contributes to improved customer satisfaction, efficient use of resources, and ongoing enhancement of your organization’s quality management efforts in the automotive industry.

Customer specified Warranty Management Process

When a customer specifies requirements for the warranty management process, it is imperative for the organization to implement and adhere to them. This demonstrates a commitment to meeting the customer’s expectations and ensuring that warranty-related processes align with the customer’s preferences:

  1. Requirement Review: Review and analyze the customer’s specified warranty management process in detail. Ensure a clear understanding of their expectations, guidelines, and any unique requirements they have outlined.
  2. Documentation: Document the customer-specified warranty management process, including all relevant details, instructions, and specific steps to be followed.
  3. Process Mapping: Map out the customer-specified process and integrate it with your existing warranty management framework, if applicable. Identify points of alignment and potential gaps.
  4. Resource Allocation: Allocate the necessary resources, personnel, and tools to effectively implement the customer-specified process. Ensure that the required skills and competencies are in place.
  5. Customization: Customize workflows, procedures, and documentation to match the customer’s requirements while ensuring they integrate seamlessly with your overall quality management system.
  6. Training: Provide training to employees involved in the warranty management process to ensure they understand and can effectively implement the customer-specified requirements. Facilitate open communication about the new process.
  7. Customer Communication: Engage in ongoing communication with the customer to clarify any ambiguities, seek clarification, and address any questions or concerns.
  8. Validation: Validate the implementation of the customer-specified process through pilot testing or trial runs. Ensure that it functions as intended and produces the desired outcomes.
  9. Feedback: Seek feedback from both internal stakeholders and the customer regarding the implementation. Use this feedback to identify areas for improvement and make necessary adjustments.
  10. Records: Maintain comprehensive documentation of the implementation process, any changes made, and the outcomes achieved. Keep records of any communication with the customer related to the process.
  11. Monitoring: Continuously monitor the performance and effectiveness of the customer-specified process. Track key performance indicators and customer satisfaction metrics.
  12. Internal Audit: Ensure that the implemented process is audited as part of your quality management system’s internal audit process to verify compliance with the customer’s requirements.
  13. Adaptation: Be prepared to adapt and adjust your processes as needed based on the customer’s feedback and changing requirements over time.

By diligently implementing the customer-specified warranty management process, the organization not only meets the customer’s expectations but also enhances its reputation as a reliable and customer-focused partner. This approach fosters strong customer relationships, drives customer satisfaction, and contributes to the organization’s success in the automotive industry.