AN AHP APPLICATION TO EVALUATE SCORING CRITERIA FOR FAILURE MODE AND EFFECT ANALYSIS (FMEA)

Dimas Campos Aguiar
Iochpe Maxion Wheels and Chassis Division
Cruzeiro, SP, Brazil
E-mail: dimastjs@gmail.com

Helder Jose Celani de Souza
Fleury Diagnostics
Sao Paulo, SP, Brazil
E-mail: hcelani@uol.com.br

Valerio A. P. Salomon
Sao Paulo State University
Guaratingueta, SP, Brazil
E-mail: salomon@feg.unesp.br

ABSTRACT

Failure Mode and Effect Analysis (FMEA) has been developed by several researchers who have proposed distinct reference tables to score the Severity, Occurrence, and Detection of a failure. This paper aims to evaluate, using the AHP, several proposals for the application of Process FMEA and to offer recommendations for its application. Five reference tables for Severity, six tables for Occurrence, and six tables for Detection are presented. These reference tables are evaluated for their application in the Brazilian automotive industry through critical analysis and expert judgments. The scientific contribution of this paper is to provide a way to select, among the reference tables available in the FMEA literature, those to be used in a Process FMEA application.

Keywords: AHP, Process FMEA, Quality Management.

1. Introduction

Failure Mode and Effect Analysis (FMEA) was first applied in 1949 by the U.S. Army. In the 1960s, it was used effectively by the National Aeronautics and Space Administration (NASA) in the Apollo Project. As reported by Fernandes and Rebelato (2006) and Sharma et al. (2007), FMEA was developed to identify potential failures in processes by defining their causes and effects. In the 1980s, FMEA became a reference for process development, initially in the aerospace industry. In 1988, the Ford Motor Company published an instruction manual for Design FMEA and Process FMEA, which has been used in product development and manufacturing processes (Society of Automotive Engineers, 2001). The use of FMEA is considered a key element in quality process planning, according to Stamatis (1995), Palady (1995), Reid (2005), and Teng et al. (2006): organizations save resources and achieve high levels of customer satisfaction when they fully apply FMEA. Thus, FMEA is a very powerful method when applied correctly; otherwise, it does not deliver its benefits (Devadasan et al., 2003). Since the technique for using FMEA is public and well known, details on how to fill in the FMEA form and how to relate its columns are not covered in this paper. They can be found in many publications, such as Kmenta and Ishii (2000) and Terninko (2003). The main objective of this paper is to compare different reference tables used to score Severity, Occurrence, and Detection, identified in the relevant literature, and to propose the best combination for Process FMEA application. For this purpose, the Analytic Hierarchy Process (AHP) is applied, supported by literature research, a theoretical basis, judgments and analysis of results, and conclusions.
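Although the FMEA form itself is not covered here, the three scores compared in this paper are conventionally multiplied into a Risk Priority Number (RPN), which the reference tables ultimately feed. The short Python sketch below is only an illustration and is not part of the original study: it shows the conventional combination and why the choice of scale matters, since a 1 to 5 scale (such as the one adopted by Matos (2004), discussed in Section 2) produces far fewer distinct RPN values than a 1 to 10 scale.

```python
from itertools import product

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Conventional Risk Priority Number: the product of the three FMEA scores."""
    return severity * occurrence * detection

# A hypothetical failure mode scored with 1-10 reference tables
print(rpn(7, 3, 5))  # 105

# How many distinct RPN values does each scale allow?
for top in (10, 5):
    values = {s * o * d for s, o, d in product(range(1, top + 1), repeat=3)}
    print(f"1-{top} scale: {len(values)} distinct RPN values")
# 1-10 scale: 120 distinct values; 1-5 scale: 30 distinct values
```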
2. Theoretical Basis

This section presents a selection of publications on techniques for FMEA application, with a focus on reference tables and their scores. Ahsen (2008) affirms that FMEA helps managers to allocate resources more efficiently when it is oriented by risk prioritization. By relating the Severity of the failure mode, the frequency with which the failure may occur, and the probability of detecting the failure, Process FMEA aims to define, demonstrate, and improve engineering solutions with respect to cost, maintainability, productivity, quality, and reliability. This section follows the sequence of a typical FMEA form, that is, Severity, Occurrence, and Detection scoring.

Severity. Each failure mode has to be classified according to the impact of its effects. Severity defines this classification with a score ranging from 1 to 10: Score 1 is the least serious and Score 10 is the most serious (Tozzi, 2004). Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006) adopted a criterion in which two highly subjective aspects can be noted. The first is the use of customer dissatisfaction as the comparison standard: each failure mode considered in the FMEA must have a potential effect that causes customer dissatisfaction, and the Severity score indicates the degree of such dissatisfaction. The other is the assignment of more than one value to the same situation, which can contribute to less accurate scoring. This proposal (S1) is shown in Table 1.

Table 1
(S1) Severity scoring based on customer satisfaction (adapted from Ben-Daya and Raouf, 1996; Chang et al., 2001; and Yang et al., 2006)

  Severity                                    Score
  The customer will probably not notice       1
  Slight annoyance                            2 or 3
  Customer dissatisfaction                    4 to 6
  High degree of customer dissatisfaction     7 or 8
  Safety or regulatory consequence            9 or 10

Other references use different criteria. Terninko (2003) adopts a common idea for the Severity score, ranging from 1 to 10 and classified from None to Catastrophic. This criterion (S2) is presented in Table 2.

Table 2
(S2) Another Severity scoring (adapted from Terninko, 2003)

  Severity       Score
  None           1
  Very Slight    2
  Slight         3
  Minor          4
  Moderate       5
  Significant    6
  Major          7
  Extreme        8
  Serious        9
  Catastrophic   10

In the FMEA applications conducted by Tozzi (2004), the reference for scoring Severity is composed of five intervals correlated with 10 scores, considering customer dissatisfaction as the comparison standard and using more than one score for the same situation, as shown in Table 3 (S3).
Table 3
(S3) Severity scoring based on the effect of the failure (adapted from Tozzi, 2004)

  Effect                                                                     Score
  Minimum (the failure affects system performance only minimally and
    most customers may not perceive this kind of failure)                    1 or 2
  Low (causes slight customer dissatisfaction due to a slight loss of
    system performance or degradation)                                       3 or 4
  Moderate (causes some customer dissatisfaction due to system
    malfunction or reduced performance)                                      5 or 6
  High (causes customer dissatisfaction)                                     7 or 8
  Very high (impacts operational safety or involves deviations from
    government regulations)                                                  9 or 10

Severity scores from 1 to 10 are the most commonly observed in practice, but this is not a mandatory rule. Matos (2004) adopted scores ranging from 1 to 5. This range considers the customer's perception of the failure and its respective effect. This criterion facilitates the assignments, restricts the number of possible values for the Risk Priority Number, and allows different risks to be grouped under the same situation. This proposal (S4) by Matos (2004) is presented in Table 4.

Table 4
(S4) Severity scoring from 1 to 5 (adapted from Matos, 2004)

  Severity                                                                  Score
  It is reasonable to expect that the customer will not perceive the
    failure                                                                  1
  The customer perceives the failure, but does not show dissatisfaction      2
  The customer will perceive the failure and be dissatisfied                 3
  The customer will be dissatisfied, but their safety is not affected        4
  The customer will be dissatisfied and their safety is affected             5

A criterion for Severity scoring based on the importance of the necessity is adopted by Fernandes and Rebelato (2006) and presented in Table 5. This proposal (S5) does not use the intermediate scores 2, 3, 5, 7, and 9.

Table 5
(S5) Severity scoring based on the importance of the necessity (adapted from Fernandes and Rebelato, 2006)

  Importance    Criterion                                                        Score
  Very Low      Necessity related to secondary functions, not relevant to the
                customer                                                          1
  Low           Necessity related to secondary functions, relevant to the
                customer                                                          4
  Moderate      Necessity related to primary functions, of little relevance to
                the customer                                                      6
  High          Necessity related to the primary functions of the product or
                service                                                           8
  Very High     Necessity related to user safety                                  10

Occurrence. To define an Occurrence score, a potential cause must be interpreted as the way in which the failure could occur, described as something that can be corrected or controlled. The idea is to identify the origin of each failure mode based on historical data. Each of these cases must then be classified with an Occurrence score, conventionally estimated on a scale from 1 to 10. Table 6 presents the Occurrence scoring (O1) proposed by Braglia (2000), which considers historical data on the Mean Time Between Failures (MTBF); an illustration of how such a table could be applied in practice is sketched after the table.

Table 6
(O1) Occurrence scoring based on the mean time between failures (adapted from Braglia, 2000)

  Occurrence    MTBF                   Score
  Remote        More than 10 years     1
  Low           2 to 10 years          2 or 3
  Moderate      6 months to 2 years    4 to 6
  High          3 to 6 months          7 or 8
  Very High     Less than 3 months     9 or 10
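As an illustration only (this helper is not part of the cited works; the function name and the choice of returning the higher score of each range are assumptions), a reference table such as Table 6 can be applied mechanically once an MTBF estimate is available:

```python
def occurrence_score_o1(mtbf_months: float) -> int:
    """Map an estimated Mean Time Between Failures (in months) to the O1
    Occurrence score of Table 6 (Braglia, 2000). Where the table gives a
    range of scores, the more conservative (higher) value is returned."""
    if mtbf_months > 120:      # more than 10 years
        return 1
    if mtbf_months >= 24:      # 2 to 10 years
        return 3
    if mtbf_months >= 6:       # 6 months to 2 years
        return 6
    if mtbf_months >= 3:       # 3 to 6 months
        return 8
    return 10                  # less than 3 months

print(occurrence_score_o1(18))  # 6
```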
Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006) define another way of scoring Occurrence (O2): from the probability of failure, without considering the occurrence of potential causes, with a score ranging from 1 to 10, as shown in Table 7.

Table 7
(O2) Occurrence scoring based on the probability of failure (adapted from Ben-Daya and Raouf, 1996; Chang et al., 2001; and Yang et al., 2006)

  Occurrence                  Probability               Score
  Remote chance of failure    Less than 1/20,000        1
  Low failure rate            Around 1/10,000           2 or 3
  Moderate failure rate       Around 1/1,000            4 to 6
  High failure rate           Around 1/100              7 or 8
  Very high failure rate      Around 1/10 or higher     9 or 10

Terninko (2003) adopted a similar reference for scoring Occurrence, as shown in Table 8 (O3), with the term "ratio" preferred to "probability".

Table 8
(O3) Occurrence scoring (adapted from Terninko, 2003)

  Occurrence        Ratio            Score
  Almost Never      3/10⁶            1
  Remote            100/10⁶          2
  Very Slight       1,000/10⁶        3
  Slight            10,000/10⁶       4
  Low               150,000/10⁶      5
  Medium            300,000/10⁶      6
  Moderate High     400,000/10⁶      7
  High              500,000/10⁶      8
  Very High         666,667/10⁶      9
  Almost Certain    900,000/10⁶      10

According to Tozzi (2004), the failure rate should be estimated by applying statistical procedures to historical data collected in similar cases; otherwise, some subjective analysis is required. Occurrence scores based on the process capability index (Cpk) are shown in Table 9 (O4).

Table 9
(O4) Occurrence scoring based on the process capability index (adapted from Tozzi, 2004)

  Occurrence                                           Cpk              Score
  Minimum (very unlikely failure occurrence)           2.0 or higher    1
                                                       1.6 to 2.0       2
  Low (rare failure occurrences)                       1.2 to 1.6       3
                                                       1.0 to 1.2       4
  Moderate (occasional failure occurrences)            0.9 to 1.0       5
                                                       0.7 to 0.9       6
  High (frequent failure occurrences)                  0.6 to 0.7       7
                                                       0.4 to 0.6       8
  Very high (almost inevitable failure occurrences)    0.3 to 0.4       9
                                                       0.3 or smaller   10

Table 10 presents the criterion (O5) for scoring Occurrence in FMEA applications according to Matos (2004).

Table 10
(O5) Occurrence scoring from 1 to 5 (adapted from Matos, 2004)

  Occurrence of failure                    Score
  Very remote probability of occurrence    1
  Low number of occurrences                2
  Moderate number of occurrences           3
  High number of occurrences               4
  Alarming failures                        5

Fernandes and Rebelato (2006) proposed Occurrence scores based on failure rates, considering the historical occurrence of the potential causes of each failure, as shown in Table 11 (O6).

Table 11
(O6) Occurrence scoring based on failure rates (adapted from Fernandes and Rebelato, 2006)

  Occurrence           Failure ratio    Score
  Almost Impossible    1/1,500,000      1
  Remote               1/150,000        2
  Low                  1/15,000         3
  Relatively Low       1/2,000          4
  Moderately High      1/400            5
  Moderately High      1/80             6
  High                 1/20             7
  Repeated Failures    1/8              8
  Very High            1/3              9
  Extremely High       1/2 or higher    10

Detection. Once the consequences of failures have been identified, prevention and detection must be addressed. Table 12 presents the scoring criterion (D1) adopted by Braglia (2000), which assesses different combinations of controls to detect a failure.
Table 12
(D1) Detection scores based on the chance of non-detection (adapted from Braglia, 2000). Each score from 1 (easily detected) to 10 (hardly detected) corresponds to a combination of four controls: visibility to the naked eye (yes, partial, no), switchboard or indirect controllability (direct, indirect, no), visibility after an inspection (yes, no), and periodic inspection (yes, no).

Prevention actions act on system features, in terms of either product or process, to reduce the risk of each failure through the definition of prevention controls. Puente et al. (2002) argue that these controls allow direct action on the potential causes of a particular failure mode, while detection controls make it possible to detect the failure mode before it reaches the next operation stage. In addition, a control plan and a detection system act preventively on the process and the product, respectively. The Detection scoring (D2) adopted by Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006) appears more consistent for the purpose of evaluating failure detection; it considers the probability of the defect reaching the customer, as shown in Table 13.

Table 13
(D2) Detection scoring based on the chance of a defect reaching the customer (adapted from Ben-Daya and Raouf, 1996; Chang et al., 2001; and Yang et al., 2006)

  Detection    Probability of an individual defect reaching the customer    Score
  Very high    0% to 5%                                                     1
  High         6% to 15%                                                    2
               16% to 25%                                                   3
  Moderate     26% to 35%                                                   4
               36% to 45%                                                   5
               46% to 55%                                                   6
  Low          56% to 65%                                                   7
               66% to 75%                                                   8
  Remote       76% to 85%                                                   9
               86% to 100%                                                  10

Terninko (2003) proposed scores based on the detection frequency (D3), as shown in Table 14.

Table 14
(D3) Detection scoring based on the detection frequency (adapted from Terninko, 2003)

  Detection            Detection frequency    Control performance           Score
  Almost Certain       900,000/10⁶            Detected before problem       1
  Very High            666,667/10⁶                                          2
  High                 500,000/10⁶                                          3
  Moderate High        400,000/10⁶                                          4
  Medium               300,000/10⁶            Corrective action possible    5
  Low                  100,000/10⁶                                          6
  Slight               10,000/10⁶                                           7
  Very Slight          1,000/10⁶                                            8
  Remote               100/10⁶                                              9
  Almost Impossible    3/10⁶                  Catastrophe occurred          10

Table 15 presents the Detection scoring criterion (D4) used in FMEA applications according to Matos (2004).

Table 15
(D4) Detection scoring from 1 to 5 (adapted from Matos, 2004)

  Detection                                          Score
  Very high probability of detecting the failure     1
  High probability of detecting the failure          2
  Medium probability of detecting the failure        3
  Low probability of detecting the failure           4
  Very low probability of detecting the failure      5

Table 16 presents the criterion (D5) proposed by Leal et al. (2005), which is based on the probability of detecting the failure cause through preventive control.
Table 16
(D5) Detection scoring (adapted from Leal et al., 2005)

  Detection               Preventive control                                            Score
  Almost Certain          Maintenance will almost certainly detect the failure cause     1
  Very High               Very high chance of detecting the failure cause                2
  High                    High chance of detecting the failure cause                     3
  Moderate High           Moderately high chance of detecting the failure cause          4
  Moderate                Moderate chance of detecting the failure cause                 5
  Low                     Low chance of detecting the failure cause                      6
  Very Low                Very low chance of detecting the failure cause                 7
  Remote                  Remote chance of detecting the failure cause                   8
  Very Remote             Very remote chance of detecting the failure cause              9
  Absolutely Uncertain    Maintenance does not detect the potential failure cause, or
                          there is no maintenance                                        10

Another criterion for Detection scoring (D6) was proposed by Andrade and Turrioni (2000), with a focus on environmental analysis, as shown in Table 17.

Table 17
(D6) Detection scoring (adapted from Andrade and Turrioni, 2000)

  Detection degree                                                            Score
  The current controls will certainly detect the failure almost
  immediately, and the reaction can be instantaneous.                         1 or 2
  There is a high probability that the aspect is detected soon after its
  occurrence, making a rapid reaction possible.                               3 or 4
  There is a moderate probability that the aspect is detected within a
  reasonable period of time before action can be taken, but the effects
  may be seen.                                                                5 or 6
  It is unlikely that the failure is detected, or it will take more time
  before action can be taken, and the effects will be seen.                   7 or 8
  The failure will not be detected at any time, or no reaction is
  possible (under normal operating conditions).                               9 or 10

3. AHP Application

Section 2 presented 17 reference tables to facilitate the scoring of Detection, Occurrence, and Severity in a company's FMEA application. The decision problem is to choose, from the scoring criteria presented in those reference tables, one criterion for Detection, one for Occurrence, and one for Severity. To evaluate the reference tables presented in Section 2, the AHP is applied. This decision-making method was selected because it allows choosing the best alternative while considering multiple criteria expressed through qualitative or quantitative values (Saaty, 2001). Montevechi and Pamplona (1996) argue that the AHP is recommended when the decision involves human opinion. With the AHP, it is possible to build comparison matrices and then set priorities among the alternatives, in this case the reference tables. One important feature of the AHP is the consistency check of the judgments that make up each comparison matrix. This is done by solving a linear algebra problem (Equation 1), where A is the comparison matrix, w is its right eigenvector, and λmax is its principal eigenvalue:

  A w = λmax w                      (1)

The consistency check of the comparison matrices is made by comparing λmax with n, the matrix order, usually through Equation 2, where μ is the consistency index. For a comparison matrix to be considered consistent, μ must be less than or equal to 0.1:

  μ = (λmax − n) / (n − 1)          (2)

For the analysis at hand, three hierarchies will be used, one for each of the FMEA dimensions: Severity, Occurrence, and Detection. The alternatives in each hierarchy will be the different scoring criteria discussed previously. A small computational sketch of Equations 1 and 2 is given below.
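As an illustration (a minimal sketch, assuming Python with numpy, neither of which is mentioned in the paper), the priority vector of Equation 1 and the consistency index of Equation 2 can be computed as follows; applying the sketch to the severity judgments reported below in Table 18 approximately reproduces the priorities and the value μ = 0.009 given in the text.

```python
import numpy as np

def ahp_priorities(upper):
    """Compute AHP priorities (Equation 1) and the consistency index
    (Equation 2) from the upper triangle of a pairwise comparison matrix,
    given row by row including the unit diagonal; the lower triangle is
    filled with reciprocals."""
    n = len(upper)
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = upper[i][j - i]
            A[j, i] = 1.0 / A[i, j]
    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = np.argmax(eigenvalues.real)          # principal eigenvalue, lambda_max
    lam_max = eigenvalues.real[k]
    w = np.abs(eigenvectors[:, k].real)
    w = w / w.sum()                          # normalized priority vector
    mu = (lam_max - n) / (n - 1)             # consistency index, Equation 2
    return w, mu

# Severity judgments of Table 18 (upper triangle, row by row)
severity_judgments = [
    [1, 1/3, 1/2, 2, 1],   # S1
    [1, 2, 4, 3],          # S2
    [1, 3, 2],             # S3
    [1, 1/2],              # S4
    [1],                   # S5
]
w, mu = ahp_priorities(severity_judgments)
print(np.round(w, 2))   # approx. [0.14 0.40 0.24 0.08 0.14]
print(round(mu, 3))     # approx. 0.009 < 0.1, so the judgments are consistent
```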
The severity hierarchy has five possible alternatives (S1-S5), corresponding to the five scoring criteria (Tables 1-5) discussed in the previous section. Similarly, the occurrence hierarchy has six alternatives (O1-O6), corresponding to the scoring criteria of Tables 6-11. Finally, the detection hierarchy also has six alternatives (D1-D6), corresponding to the scoring criteria of Tables 12-17. Tables 18 to 20 present the comparison matrices for the three hierarchies. All judgments were made by two quality engineers experienced in FMEA applications in the automotive industry.

Table 18
Severity scoring criteria judgments

        S1     S2     S3     S4     S5     Priority
  S1     1     1/3    1/2     2      1       0.14
  S2            1      2      4      3       0.40
  S3                   1      3      2       0.24
  S4                          1     1/2      0.08
  S5                                 1       0.14

  S1: Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006); S2: Terninko (2003); S3: Tozzi (2004); S4: Matos (2004); S5: Fernandes and Rebelato (2006).

Table 19
Occurrence scoring criteria judgments

        O1     O2     O3     O4     O5     O6     Priority
  O1     1      5      4      3      8      4       0.42
  O2            1     1/2    1/3     4      2       0.10
  O3                   1     1/2     5      1       0.12
  O4                          1      6      2       0.20
  O5                                 1     1/5      0.03
  O6                                        1       0.12

  O1: Braglia (2000); O2: Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006); O3: Terninko (2003); O4: Tozzi (2004); O5: Matos (2004); O6: Fernandes and Rebelato (2006).

Table 20
Detection scoring criteria judgments

        D1     D2     D3     D4     D5     D6     Priority
  D1     1      4      6      3      7      3       0.42
  D2            1      3     1/2     4     1/2      0.12
  D3                   1     1/4     2     1/4      0.06
  D4                          1      5      1       0.19
  D5                                 1     1/5      0.04
  D6                                        1       0.19

  D1: Braglia (2000); D2: Ben-Daya and Raouf (1996), Chang et al. (2001), and Yang et al. (2006); D3: Terninko (2003); D4: Matos (2004); D5: Leal et al. (2005); D6: Andrade and Turrioni (2000).

For Tables 18 to 20, the values found for μ are, respectively, 0.009, 0.046, and 0.047. As all these values are less than 0.1, the judgments provided by the engineers can be considered consistent. The main result of the AHP application is the indication of Terninko (2003) to score Severity (S2), and of Braglia (2000) to score both Occurrence (O1) and Detection (D1). It is important to state that this result is limited to companies with features similar to those of the company where the data were collected, that is, an industrial company in Brazil at the end of 2009.

4. Conclusions

This paper first presented different scoring criteria for Detection, Occurrence, and Severity, identified in publications on FMEA application in different situations. The use of the AHP provided a simple way to choose one criterion for Detection, one for Occurrence, and one for Severity; this was done by comparing the different proposals and prioritizing them within three hierarchies. For future studies, it is suggested to develop action research in the manufacturing environment using the combination of reference tables for the Process FMEA scores recommended in this work. Another recommendation is the application of other concepts, such as Group Decision Making, to extend the results to a sector or to different companies.

REFERENCES

Ahsen, A. (2008). Cost-oriented failure mode and effects analysis. International Journal of Quality & Reliability Management, 25(5), 466-476.

Andrade, M.R.S. & Turrioni, J.B. (2000). A method for environmental impact analysis with FMEA utilisation. Proceedings of the 20th National Meeting of Production Engineering. Sao Paulo, Brazil. (In Portuguese).
Ben-Daya, M. & Raouf, A. (1996). A revised failure mode and effects analysis model. International Journal of Quality & Reliability Management, 13(1), 43-47.

Braglia, M. (2000). MAFMA: multi-attribute failure mode analysis. International Journal of Quality & Reliability Management, 17(9), 1017-1033.

Chang, C.L., Liu, P.H. & Wei, C.C. (2001). Failure mode and effects analysis using grey theory. Integrated Manufacturing Systems, 12(3), 211-216.

Devadasan, S.R., Muthu, S., Samson, R.N. & Sankaran, R.A. (2003). Design of total failure mode and effects analysis program. International Journal of Quality & Reliability Management, 20(5), 551-568.

Fernandes, J.M.R. & Rebelato, M.G. (2006). Proposal of a method to integrate QFD and FMEA. Gestão & Produção, 13(2), 245-259. (In Portuguese).

Kmenta, S. & Ishii, K. (2000). Scenario-based FMEA: A life cycle cost perspective. Baltimore, MD: ASME Publications.

Leal, F., Pinho, A.F. & Almeida, D.A. (2005). Failure analysis with FMEA and grey theory. Proceedings of the 25th National Meeting of Production Engineering. Porto Alegre, Brazil. (In Portuguese).

Matos, R.B. (2004). Performance indicators for sawn wood improvement in small enterprises: a case study. Master's dissertation (Forest Resources). Piracicaba, Brazil: USP. (In Portuguese).

Montevechi, J.A.B. & Pamplona, E.O. (1996). Hierarchy analysis of investments. Proceedings of the 16th National Meeting of Production Engineering. Piracicaba, Brazil. (In Portuguese).

Palady, P. (1995). Failure Modes and Effects Analysis. Portland, OR: PT Publications.

Puente, J., Pino, R., Priore, P. & de la Fuente, D. (2002). A decision support system for applying failure mode and effects analysis. International Journal of Quality & Reliability Management, 19(2), 137-151.

Reid, R.D. (2005). FMEA: Something old, something new. Quality Progress, 38, 90-93.

Saaty, T.L. (2001). Analytic Hierarchy Process, Decision Making for Leaders. Pittsburgh, PA: RWS Publications.

Sharma, R.K., Kumar, D. & Kumar, P. (2007). Modeling and analysing system failure behaviour using RCA, FMEA and NHPP models. International Journal of Quality & Reliability Management, 24(5), 525-546.

Society of Automotive Engineers (2001). ARP 5580: Recommended Failure Modes and Effects Analysis (FMEA) Practices for Non-Automobile Applications. USA: G-11 SAE Reliability Committee Publications.

Stamatis, D.H. (1995). Failure Mode and Effect Analysis: FMEA from Theory to Execution. Milwaukee, WI: ASQ Quality Press.

Teng, S.G., Ho, S.M., Shumar, D. & Liu, P.C. (2006). Implementing FMEA in a collaborative supply chain environment. International Journal of Quality & Reliability Management, 23(2), 179-196.

Terninko, J. (2003). Reliability/mistake-proofing using failure mode and effect analysis (FMEA). Proceedings of the 57th Annual Quality Congress, Kansas City, MO.

Tozzi, A.R. (2004). Development of a program to check the process of cable releasing with FMEA aid. MBA thesis. Porto Alegre, Brazil: UFRGS. (In Portuguese).

Yang, C.C., Lin, W.T., Lin, M.Y. & Huang, T. (2006). A study on applying FMEA to improving ERP introduction: an example of semiconductor-related industries in Taiwan. International Journal of Quality & Reliability Management, 23(2), 298-322.