Advancements in Agricultural Development, Volume 2, Issue 3, 2021
agdevresearch.org
https://doi.org/10.37433/aad.v2i3.169

Comparing the Borich Model with the Ranked Discrepancy Model for Competency Assessment: A Novel Approach

L. K. Narine¹, A. Harder²

1. Lendel K. Narine, Extension Assistant Professor and Evaluation Specialist, Utah State University, 4900 Old Main Hill, Logan, UT 84322, lendel.narine@usu.edu, https://orcid.org/0000-0001-6962-2770
2. Amy Harder, Professor, University of Florida, P.O. Box 112060, Gainesville, FL 32611-2060, amharder@ufl.edu, https://orcid.org/0000-0002-7042-2028

Abstract

In 1980, Borich presented a new model that allowed errors in an individual's judgment of self-proficiency to be offset by considering the perception of a group. The model relied upon the calculation of means for competency items measured with ordinal scales, an approach subject to debate in modern times. The purpose of our study was to explore the use of a novel approach we developed, the Ranked Discrepancy Model (RDM), as an alternative method to the Borich model for determining training needs. Data obtained from an online survey of extension professionals employed by a land-grant university in the United States were used to compare the training needs identified by applying the Borich model with those identified by applying the RDM. A very strong and statistically significant correlation existed between the scores derived from the two models, demonstrating a high level of consistency between them. Researchers conducting competency research should consider adopting the RDM given its suitability for delivering results that closely resemble findings from the Borich model while providing improved rigor in methods and increased detail about training needs.

Keywords

Gaps, needs, professional development, extension

Introduction and Problem Statement

In 1980, Borich introduced a new model for assessing educator training needs. Borich's (1980) use of the group mean to describe the perceived importance of a competency and weight an individual's proficiency gap revolutionized how training needs were identified by accounting for errors inherent in an individual's judgment of what was important to know or do. Agricultural and extension education practitioners and scholars alike embraced the Borich model (e.g., Elhamoly et al., 2014; Umar et al., 2017; Waters & Haskell, 1988), and we count ourselves among its many adopters. However, usage of the Borich model is worth reflecting upon more than forty years later to determine its appropriateness for contemporary needs assessment research.

One reason to revisit the Borich model is the unsettled debate over the use of means to describe items measured on ordinal scales. Means derived from individual ordinal items are an inherent part of calculating the mean weighted discrepancy scores (MWDS) needed in the Borich model. Arguments can be found for and against using means of ordinally-scaled items (e.g., Boone & Boone, 2012; Norman, 2010). The controversy over using means for individual ordinal items affects the acceptance of research conducted with the Borich model in the broader scientific community.
A new analytical method is needed to help researchers identify competency training needs efficiently and avoid getting caught in the ordinal mean debate, while preserving the underlying rationale of the Borich model.

Theoretical and Conceptual Framework

The Borich model is primarily used to determine priority competencies for professional development (Borich, 1980) and is appropriate for assessing the needs of a sample, such as extension professionals or agricultural teachers. The Borich model relies on identifying gaps, called discrepancy scores, between a respondent's perceived ability to perform a particular competency and how important that competency is for job success. The discrepancy score is the difference between a respondent's ratings of importance and ability on ordinal scales. In a Borich assessment, a discrepancy suggests an individual does not have sufficient ability to perform an important competency; therefore, a gap exists between the ideal and current conditions. A deviation between an ideal and current condition, or what should be and what is, represents the underlying nature of a need as described by Witkin and Altschuld (1995). For example, a respondent may rate a competency as having above average importance for their job success, but self-report having below average ability. This calculation is analogous to identifying gaps in a quantitative needs assessment process; a gap or discrepancy exists when a current condition is less than an ideal condition (Boyle, 1981; English & Kaufman, 1975; Witkin & Altschuld, 1995).

Next, the Borich model uses the perceptions of the sample to estimate a competency's actual importance (Borich, 1980) by calculating the sample mean for importance. This approach helps to overcome individual errors in judgment. Each respondent's discrepancy score is weighted by multiplying it with the sample mean for importance for a given competency, resulting in an individual's weighted discrepancy score. Finally, a MWDS is calculated by averaging the weighted discrepancy scores for the entire sample. The MWDS are then used to determine training needs, with positive scores indicating a need for intervention and negative scores indicating a need does not exist since ability exceeds importance.
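Borich (1980) describes this calculation in prose only; the following is a minimal sketch of the arithmetic just described, with hypothetical ratings and variable names of our own choosing.

```python
import numpy as np

def mwds(importance, ability):
    """Mean weighted discrepancy score per Borich (1980).

    importance, ability: paired ratings for one competency,
    one value per respondent, on matching ordinal scales.
    """
    importance = np.asarray(importance, dtype=float)
    ability = np.asarray(ability, dtype=float)
    discrepancy = importance - ability           # each respondent's gap
    weighted = discrepancy * importance.mean()   # weight by the group mean importance
    return weighted.mean()                       # average across the sample

# Hypothetical ratings for one competency on 5-point scales
print(mwds(importance=[5, 4, 5, 3, 4], ability=[3, 4, 2, 3, 5]))  # about 3.36; positive = training need
```

On a 5-point scale, this statistic can range from -4 (everyone rates importance 1 and ability 5) to 20 (everyone rates importance 5 and ability 1), which is the range issue discussed below.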
Herein lie two problems with the Borich model. First, the weighted discrepancy score depends on the use of item means for importance. Recall that a respondent provides a single judgment of the importance of a competency on an ordinal scale, resulting in the use of the controversial ordinal scale means (Kuzon et al., 1996). While means are suitable for reliable psychometric constructs consisting of multiple normally distributed ordinal items, taking the mean of a single ordinal item is generally not recommended (Dillman et al., 2014; Sullivan & Artino, 2013). Second, the MWDS for each competency does not fall within an immediately interpretable standardized range. A MWDS ranges from -4 to 20 when using a 5-point response scale. However, any time a researcher opts to use anything except a 5-point response scale (e.g., a 7-point importance scale ranging from Not at all important to Extremely important), the MWDS range changes, creating difficulties in comparing competency needs across different studies.

Valid reasons exist for using scales with varying numbers of response anchors. Dillman et al. (2014) stated a unipolar scale, such as the type used for assessing competency training needs, would be acceptable with only four scale points, and this format would decrease respondent burden. Conversely, Preston and Colman (2000) found that scores obtained from 7- to 10-point scales were more reliable and had better criterion validity coefficients and discriminating power than scores from scales with fewer points. If Borich findings can only be compared across studies when researchers use a 5-point response scale, then this is a limitation of the Borich model.

We propose use of the Ranked Discrepancy Model (RDM) as an alternative to the Borich model. Application of the RDM is only appropriate when certain conditions exist: (a) cross-sectional data (Ary et al., 2014) are gathered from a sample or census of a target population at one point in time, (b) data for each variable or item are paired on two ordinal scales with an equivalent number of response anchors, and (c) the objective is to assess discrepancies between two clearly identified states or conditions for each item. These conditions are also necessary for the application of Borich's (1980) model for determining training needs.

The RDM circumvents the two major drawbacks of the Borich model. As a descriptive approach, the RDM avoids the use of means for single items measured with ordinal scales (i.e., individual competency items). It also provides an intuitive standardized score that represents the discrepancy or gap in ability compared to a known state of equilibrium, which is consistent with early needs assessment literature, namely Lewin's (1939) field theory of motivation.

There are three steps in the RDM; an illustrated step-by-step example is included after the findings. First, calculate the number of occurrences in the sample when respondents' ability ratings are: (a) less than respondents' importance ratings (Negative Ranks = NR), (b) more than respondents' importance ratings (Positive Ranks = PR), or (c) equal to respondents' importance ratings (Tied Ranks = TR). Second, convert the number of occurrences for NR, PR, and TR into percentages. Third, assign relative weights (W) to NR% (WNR = -1), PR% (WPR = 1), and TR% (WTR = 0). The resulting Ranked Discrepancy Score (RDS) is a standardized score ranging from -100 to 100. The RDS has an equilibrium of 0, with negative scores indicating a priority need or discrepancy in ability, and positive scores indicating the absence of a gap or need.

Like the MWDS, the RDS provides a snapshot of the professional capacity of an organization with respect to a competency area. Therefore, a negative, equilibrium, or positive RDS does not imply every individual in the sample has inadequate or adequate capacity to perform a specific competency. It assesses the priority professional development needs of the sample as a whole by accounting for the ability of all individuals within the sample. Like Borich (1980), the RDM does not take a deficit approach to identifying needs; it considers more than just those with negative ranks. In addition, the RDM does not rely on a sample mean for importance; instead, it capitalizes on the frequency distribution of each item. This approach is widely regarded as an appropriate way to handle ordinal items (Sullivan & Artino, 2013), even with nonnormally distributed data.
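In symbols, with a_i and m_i denoting respondent i's ability and importance ratings for an item answered by n respondents, the three steps reduce to (our notation; the weights are those assigned above):

```latex
\[
\mathrm{NR\%} = \frac{100}{n}\,\#\{i : a_i < m_i\}, \qquad
\mathrm{PR\%} = \frac{100}{n}\,\#\{i : a_i > m_i\}, \qquad
\mathrm{TR\%} = \frac{100}{n}\,\#\{i : a_i = m_i\},
\]
\[
\mathrm{RDS} = (-1)\,\mathrm{NR\%} + (1)\,\mathrm{PR\%} + (0)\,\mathrm{TR\%}
             = \mathrm{PR\%} - \mathrm{NR\%}, \qquad -100 \le \mathrm{RDS} \le 100.
\]
```

Because NR%, PR%, and TR% sum to 100, the score is bounded by -100 and 100 regardless of the number of response anchors on the underlying scales.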
The frequency distribution of importance and ability ratings directly influences the RDS via NR%, PR%, and TR%. The RDM is intended to be an intuitive approach to handling paired needs assessment data. The RDS represents the severity of a need and allows for direct comparison and priority ranking between competencies.

Purpose

The purpose of the study was to explore the use of the RDM as an alternative method to the Borich model for determining training needs. The objectives were to:

1. Describe the unweighted rank frequencies and RDS for program planning and program evaluation competencies.
2. Compare MWDS and RDS for program planning and program evaluation competencies.
3. Describe the relationship between scores resulting from the application of the Borich model and RDM.

Methods

Our study used Borich-type data from a competency assessment conducted at the University of Florida in 2021. A convenience sample of county agents was taken by surveying those who were registered (N = 276) for an annual professional development symposium. With a 58.30% response rate, the sample consisted of 161 individuals (n = 161). However, an examination of the dataset revealed several responses that were incomplete or mostly missing. Partial responses were removed from the dataset, and the final usable sample was 122 county agents (n = 122, 44.20%). A small number of missing values (< 1% of observations) in the final dataset were determined to be missing at random. For each competency item, missing values were replaced with the corresponding sample mean for that item to maintain the initial distribution properties of the data (Dodeen, 2003).

Survey data were gathered using a researcher-made questionnaire (Ary et al., 2014). The questionnaire consisted of a list of Extension core competency items related to program planning and evaluation. Selected items were consistent with previous studies in the subject area (e.g., Harder et al., 2010; Lakai et al., 2012; Lindner et al., 2010; Maddy et al., 2002; Narine & Ali, 2020; Scheer et al., 2011; Suvedi & Kaplowitz, 2016). There were 17 competencies in program planning and 15 competencies in program evaluation. Following Borich's (1980) approach, respondents were first asked to rate their ability to perform each competency using a 5-point ordinal scale with the following options: 1 = None, 2 = Below Average, 3 = Average, 4 = Above Average, and 5 = Exceptional. Next, respondents were asked to rate the degree to which each competency was important to their job success using a 5-point ordinal scale with the following options: 1 = None, 2 = Below Average, 3 = Average, 4 = Above Average, and 5 = Essential.

Data were analyzed using the Borich model and RDM to compare findings. Complete details about how to calculate training needs according to Borich (1980) can be found in the original reference. Negative Ranks (NR), Positive Ranks (PR), and Tied Ranks (TR) were calculated using IBM SPSS Statistics (Version 27) by performing the Wilcoxon signed-rank test between paired responses (i.e., observations for ability and importance) for each competency item. Perceived importance was entered as the first variable in the test window and paired with self-assessed ability. An automatic output table with NR, PR, and TR was generated.
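SPSS is not required for this step. As a point of reference, a minimal Python sketch (hypothetical data and column names) reproduces the same rank counts from paired ratings, using the convention above that a negative rank is an ability rating below the paired importance rating.

```python
import pandas as pd

# Hypothetical paired ratings for one competency item (one row per respondent)
df = pd.DataFrame({
    "importance": [4, 5, 3, 4, 2],
    "ability":    [5, 3, 3, 4, 1],
})

n = len(df)
nr = (df["ability"] < df["importance"]).sum()   # Negative Ranks
pr = (df["ability"] > df["importance"]).sum()   # Positive Ranks
tr = (df["ability"] == df["importance"]).sum()  # Tied Ranks

# Steps 2 and 3: convert counts to percentages, then apply the RDM weights
nr_pct, pr_pct, tr_pct = 100 * nr / n, 100 * pr / n, 100 * tr / n
rds = -1 * nr_pct + 1 * pr_pct + 0 * tr_pct     # equivalently, PR% - NR%
print(nr, pr, tr, rds)                          # 2 1 2 -20.0
```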
Then, rank values (i.e., the number of occurrences for NR, PR, and TR) from the SPSS output table were exported to Microsoft Excel to perform Steps 2 and 3. After finding the number of occurrences of NR, PR, and TR for each competency item, the next step was to convert the three rank counts into percentages. The final step was to apply weights to NR%, PR%, and TR%. The RDS was calculated as follows: RDS = NR% (-1) + PR% (1) + TR% (0). In practice, the last term in the equation (TR% x 0) drops out, leaving RDS = NR% (-1) + PR% (1). However, TR is important to the model because it affects the percentages for NR and PR.

With weights applied, the RDS equals -100 if all individuals have a negative discrepancy in ability relative to a competency's importance (i.e., 100% NR). A negative RDS approaching -100 reflects the magnitude of the discrepancy for one competency item and is directly comparable to the RDS for other competencies. In contrast, the RDS will equal +100 when all individuals have a positive discrepancy in ability relative to a competency's importance. As mentioned, the RDS should be interpreted as representing the overall capacity of the sample to perform a competency; it indicates the needs of the sample as a whole.

Findings

Table 1 provides the unweighted rank frequencies used to calculate the RDS for program planning competency items. After applying weights to NR (-1), PR (1), and TR (0), the RDS shows discrepancies in each item from the point of equilibrium (0). All items in Table 1 had a negative RDS, indicating a gap in ability to perform all program planning competencies. The RDS also shows the magnitude of the gap since all items are directly comparable on a standardized score between -100 and 100. Based on the results, the top three priority competency items for attention are to (a) develop long-term program objectives (RDS = -61), (b) conduct a needs assessment for your program (RDS = -60), and (c) use the results of a needs assessment for planning (RDS = -57).

Table 1
Ranks and Ranked Discrepancy Scores for Program Planning

Program planning | NR% | PR% | TR% | RDS
Develop long-term (social, economic, environmental) program objectives | 63 | 2 | 35 | -61
Conduct a needs assessment for your program | 69 | 9 | 22 | -60
Use the results of a needs assessment for planning | 61 | 4 | 34 | -57
Develop medium-term (behavior change) program objectives | 56 | 5 | 39 | -51
Translate needs assessment information into a situation statement | 54 | 7 | 39 | -48
Develop long-term Extension program plans (extending beyond 2-3 years) | 52 | 7 | 41 | -46
Establish programming priorities | 48 | 6 | 46 | -43
Align program priorities at the local level with the Extension Roadmap | 48 | 8 | 43 | -40
Organize an effective program advisory committee | 46 | 7 | 48 | -39
Assess available local/community resources | 44 | 7 | 49 | -38
Conduct interviews to obtain information for planning | 45 | 8 | 47 | -37
Develop short-term (knowledge, attitude, skill, aspiration) program objectives | 44 | 8 | 48 | -36
Develop an annual plan of work | 44 | 9 | 47 | -35
Develop a logic model for a planned program | 44 | 11 | 44 | -33
Develop monthly work schedule | 35 | 10 | 55 | -25
Consult professionals with knowledge and experience about planning educational activities | 34 | 12 | 53 | -22
Develop weekly work schedule | 33 | 13 | 54 | -20

Table 2 shows the unweighted rank frequencies and RDS for program evaluation competency items. Based on the RDS, there was a gap in ability to perform all competency items related to evaluation.
The largest discrepancy identified by the RDS was for "conduct follow-up surveys to measure behavior change" (RDS = -58).

Table 2
Ranks and Ranked Discrepancy Scores for Program Evaluation

Program evaluation | NR% | PR% | TR% | RDS
Conduct follow-up surveys to measure behavior change (e.g., practices adopted) | 60 | 2 | 39 | -58
Write interview and/or focus group questions | 59 | 4 | 37 | -55
Establish measurable objectives for evaluating the success or failure of a program | 59 | 5 | 36 | -54
Communicate evaluation information to stakeholders | 60 | 6 | 34 | -54
Use evaluation results to improve your program | 57 | 4 | 39 | -52
Clearly distinguish between program outputs and outcomes | 54 | 2 | 44 | -52
Develop intended outcomes that relate to the measurable objectives | 57 | 7 | 37 | -50
Analyze findings from evaluation activities | 52 | 2 | 45 | -50
Write survey questions | 57 | 7 | 35 | -50
Interpret findings from evaluation activities | 52 | 4 | 43 | -48
Prepare reports on program outcomes using evaluation findings | 53 | 5 | 42 | -48
Design valid pre- and post-tests | 54 | 8 | 38 | -46
Align local impact data with UF/IFAS Extension Roadmap | 51 | 7 | 42 | -43
Use online survey tools such as Qualtrics to collect data | 50 | 8 | 42 | -42
Monitor Extension program activities | 36 | 7 | 57 | -29

Table 3 provides a comparison between the Borich model and RDM for program planning competency items. Both models confirmed a discrepancy in ability for all program planning competencies. In Table 3, competency items were ranked based on the discrepancy in ability, which translates into priorities for professional development; a positive MWDS and a negative RDS indicate a need for professional development. The top three priority competencies were the same for the Borich model and RDM. Further, nine of the top 10 items were equivalent across models, with the only exception being "align program priorities at the local level with the Extension Roadmap." This item was ranked 11th in the Borich model and 8th in the RDM. Meanwhile, "develop short-term program objectives" was ranked 10th in the Borich model and 12th in the RDM. Lastly, the five items of lowest priority were equivalent across both models.
Table 3
MWDS Compared to RDS for Program Planning

Program planning | MWDS | RDS | Borich rank | RDM rank
Develop long-term (social, economic, environmental) program objectives | 4.04 | -61 | 1 | 1
Conduct a needs assessment for your program | 3.77 | -60 | 2 | 2
Use the results of a needs assessment for planning | 3.34 | -57 | 3 | 3
Develop medium-term (behavior change) program objectives | 2.88 | -51 | 4 | 4
Translate needs assessment information into a situation statement | 2.64 | -48 | 6 | 5
Develop long-term Extension program plans (extending beyond 2-3 years) | 2.77 | -46 | 5 | 6
Establish programming priorities | 2.42 | -43 | 8 | 7
Align program priorities at the local level with the Extension Roadmap | 2.08 | -40 | 11 | 8
Organize an effective program advisory committee | 2.48 | -39 | 7 | 9
Assess available local/community resources | 2.21 | -38 | 9 | 10
Conduct interviews to obtain information for planning | 1.88 | -37 | 12 | 11
Develop short-term (knowledge, attitude, skill, aspiration) program objectives | 2.09 | -36 | 10 | 12
Develop an annual plan of work | 1.86 | -35 | 13 | 13
Develop a logic model for a planned program | 1.75 | -33 | 14 | 14
Develop monthly work schedule | 1.27 | -25 | 15 | 15
Consult professionals with knowledge and experience about planning educational activities | 1.17 | -22 | 16 | 16
Develop weekly work schedule | 0.97 | -20 | 17 | 17

Table 4 compares rankings of program evaluation competency items between the Borich model and RDM. Like program planning, both models confirmed there were discrepancies in all competency items relating to evaluation. The top three priority items were similar across models, albeit with different ordering. While "communicate evaluation information to stakeholders" was ranked as the highest priority in the Borich model, "conduct follow-up surveys to measure behavior change" was ranked highest in the RDM. Similar to program planning items, nine of the top ten evaluation competency items were consistent across models. While "analyze findings from evaluation activities" was ranked 12th in the Borich model, it was ranked 9th in the RDM. Also, "design valid pre- and post-tests" was ranked 9th in the Borich model, but ranked 12th in the RDM.

Table 4
MWDS Compared to RDS for Program Evaluation

Program evaluation | MWDS | RDS | Borich rank | RDM rank
Conduct follow-up surveys to measure behavior change (e.g., practices adopted) | 3.73 | -58 | 2 | 1
Write interview and/or focus group questions | 3.48 | -55 | 3 | 2
Communicate evaluation information to stakeholders | 3.75 | -54 | 1 | 3
Establish measurable objectives for evaluating the success or failure of a program | 3.40 | -54 | 4 | 4
Clearly distinguish between program outputs and outcomes | 3.27 | -52 | 7 | 5
Use evaluation results to improve your program | 3.28 | -52 | 6 | 6
Write survey questions | 3.37 | -50 | 5 | 7
Develop intended outcomes that relate to the measurable objectives | 2.96 | -50 | 10 | 8
Analyze findings from evaluation activities | 2.89 | -50 | 12 | 9
Prepare reports on program outcomes using evaluation findings | 3.03 | -48 | 8 | 10
Interpret findings from evaluation activities | 2.92 | -48 | 11 | 11
Design valid pre- and post-tests | 2.97 | -46 | 9 | 12
Align local impact data with UF/IFAS Extension Roadmap | 2.60 | -43 | 14 | 13
Use online survey tools such as Qualtrics to collect data | 2.63 | -42 | 13 | 14
Monitor Extension program activities | 1.65 | -29 | 15 | 15

Figure 1 illustrates the observed distances between MWDS and RDS.
Absolute z-scores were used for comparison due to the inverted interpretation of scores between models; a positive MWDS in the Borich model and a negative RDS in the RDM represent a need or discrepancy. The figure shows a clear relationship between MWDS and RDS; scores followed a similar pattern across all 32 competency items. A Pearson correlation revealed a very strong relationship between scores (r = 0.98), demonstrating a high level of consistency between models.

Figure 1
Relationship Between Scores in the Borich Model and Ranked Discrepancy Model
[Figure not reproduced: line plot of absolute z-MWDS and absolute z-RDS (vertical axis, 0.0 to 2.5) across the 32 competency items.]

Analytical Steps in the RDM

Getting Started: Gather Borich-type competency data. Figure 2 provides a sample item from a competency assessment questionnaire.

Figure 2
Program Planning Item in a Competency Assessment
[Figure not reproduced.]

Figure 3 shows sample raw data in SPSS for five respondents. The left side shows the value labels for each respondent (i.e., row), and the right side shows the coded values. The data view is the first place to observe the number of occurrences in the sample when respondents' ability ratings are: (a) less than respondents' importance ratings (Negative Ranks = NR), (b) more than respondents' importance ratings (Positive Ranks = PR), or (c) equal to respondents' importance ratings (Tied Ranks = TR). For example, Respondent 1 has a positive rank, Respondent 2 has a negative rank, and Respondent 4 has a tied rank.

Figure 3
Sample Competency Data Viewed in SPSS
[Figure not reproduced.]

Step 1: Calculate the number of occurrences for negative ranks (NR), positive ranks (PR), and tied ranks (TR) in SPSS (see Figure 4). In SPSS, run the test as follows:

• Analyze → Nonparametric Tests → Legacy Dialogs → 2 Related Samples.
• For each competency item, enter responses for Importance (Variable 1) and Ability (Variable 2).
• Repeat entries in Pairs for each competency item.

Figure 4
The Wilcoxon Test Window in SPSS (v. 27)
[Figure not reproduced.]

The resulting SPSS output will provide Negative Ranks, Positive Ranks, and Tied Ranks as shown in Figure 5.

Figure 5
Ranks Generated in the Output Window for the Wilcoxon Test in SPSS (v. 27)
[Figure not reproduced.]

Step 2: Convert the number of occurrences of NR, PR, and TR into percentages in Excel. Copy the output from SPSS to Excel and calculate NR%, PR%, and TR%. Figure 6 shows the data structure in Excel. From Figure 5:

• NR% = (NR/Sample Size) x 100 → (3/5) x 100 = 60
• PR% = (PR/Sample Size) x 100 → (1/5) x 100 = 20
• TR% = (TR/Sample Size) x 100 → (1/5) x 100 = 20

Figure 6
Basic Data Structure of RDM Data in Excel
[Figure not reproduced.]

Step 3: Assign relative weights to NR% (WNR = -1), PR% (WPR = 1), and TR% (WTR = 0). From Figure 6, weights were assigned by multiplying the percentage for each rank by the corresponding weight as follows:

• NR% x (-1) → 60 x (-1) = -60
• PR% x (1) → 20 x (1) = 20
• TR% x (0) → 20 x (0) = 0

The final Ranked Discrepancy Score (RDS) is calculated by summing the weighted scores for the ranks as follows:

• NR% (-1) + PR% (1) + TR% (0) = RDS → (-60) + 20 + 0 = -40
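For completeness, the three steps condense to a few lines of code. The following sketch repeats the worked example; the ratings are invented to be consistent with the ranks reported in Figures 3 and 5 (Respondent 1 positive, Respondent 2 negative, Respondent 4 tied; NR = 3, PR = 1, TR = 1), since the exact values in Figure 3 are not reproduced here.

```python
# Hypothetical ratings consistent with the worked example (n = 5)
importance = [4, 5, 5, 3, 4]
ability    = [5, 3, 4, 3, 2]

n = len(importance)
nr = sum(a < m for a, m in zip(ability, importance))   # Step 1: 3 negative ranks
pr = sum(a > m for a, m in zip(ability, importance))   # 1 positive rank
tr = sum(a == m for a, m in zip(ability, importance))  # 1 tied rank

# Steps 2 and 3 combined: RDS = NR%(-1) + PR%(1) + TR%(0) = PR% - NR%
rds = (pr - nr) / n * 100
print(nr, pr, tr, rds)  # 3 1 1 -40.0
```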
Conclusions, Discussion, and Recommendations

We sought to determine if the RDM could serve as a suitable alternative to the Borich model. Our goal was to retain Borich's (1980) emphasis on using group perception to determine when or if gaps in competency should be considered priorities for training. Our findings support the utility of the RDM as a compatible alternative to the Borich model. The comparison of RDS to MWDS showed a great deal of consistency in rankings, despite the RDM discarding the group mean used in the calculations of Borich's (1980) model. Nine of the 17 program planning competencies had the same ranking when calculated using the RDM and Borich model approaches. No ranking was more than three places apart, and five rankings were within one place of each other. For program evaluation, the same trend was observed. Four of fifteen competencies had the same ranking when calculated with either model, while another four competencies were within one place. The remaining competencies were no more than three places apart. The visual comparison of absolute z-scores and the results of the correlational analysis further confirm that a strong relationship exists between competency gaps identified by the RDM and Borich model. An implication of our finding is that calculating the group mean is not required for determining competency gaps.

Researchers conducting competency research should consider adopting the RDM given its suitability for delivering results that closely resemble findings from the Borich model. Adopting the RDM allows researchers to avoid having their work scrutinized for the use of means for individual ordinal items (e.g., Boone & Boone, 2012; Norman, 2010). The standardization of the RDS, regardless of how many scale points are used to measure importance and ability, offers researchers an improved ability to compare their work with prior studies of the same competencies to determine how closely their findings match others.

Another advantage of the RDM is that it decreases the complexity of interpreting results. The Borich model yields a positive MWDS when training is needed. In our opinion, this is not intuitive given that many readers will have matriculated through a school system in which the goal was to score as close to 100 as possible to demonstrate mastery of a subject. We borrowed the same logic for the RDM. Instead of positive scores indicating a lack of competence, the RDM provides a negative RDS when training needs are greater (i.e., there are many individuals lacking sufficient ability and few individuals with an abundance of ability), which more clearly conveys that a problem exists that should be corrected. Therefore, the RDS demonstrates the magnitude of a discrepancy and maintains the underpinnings of a need as described by Witkin and Altschuld (1995), or a motivational force as discussed by Lewin (1939).

The standardized range of -100 to 100 used by the RDM is cognitively easier to interpret than a range that varies with the number of response anchors, such as the -4 to 20 range produced by the Borich model with a 5-point scale. Moreover, as seen in our findings, MWDS often range between 2 and 4, making it seem like the magnitude of a training need is quite small even when a score of 4 indicates a serious gap in proficiency. The example of developing long-term program objectives illustrates the difference; compare the MWDS of 4.04 (maximum possible score = 20) versus the RDS of -61 (minimum possible score = -100).
The RDM does a superior job of showing the magnitude of the gap.

We want to be clear that proper use of the RDM requires the consideration of all three rank categories: PR, NR, and TR. The reason is that the RDS represents the capacity of the sample, inclusive of individuals who are excellent at a given competency and those who lack the necessary ability. For example, developing long-term objectives has a slightly more negative RDS (-61) than conducting a needs assessment (-60), despite a greater percentage of NR for the latter competency. However, 9% of the sample reported greater ability than necessary for conducting needs assessments, while only 2% said the same for developing long-term objectives. Cumulatively, this amounts to greater overall capacity for conducting needs assessments in the sample. In practice, knowing what percentage of the sample has more ability than needed is helpful for assessing whether professional development strategies based on peer-to-peer learning, mentoring, or coaching may be effective. A skilled staff development professional should develop interventions that build upon existing assets, including human capacity.

Quantitative research in extension often relies on ordinal data; we commonly operationalize constructs to test theories and develop ordinal rating scales to measure psychological variables. As such, the Borich model has been widely applied in extension over the past forty years due to its ability to provide meaningful insights into the professional development needs of professionals. However, with ongoing philosophical and statistical debates in academia, we must be able to justify our analytical approaches to the wider scientific community. The RDM provides an innovative and defensible approach for researchers and practitioners interested in using needs assessment data to determine competency gaps when planning their professional development interventions.

References

Ary, D., Jacobs, L. C., Sorenson, C., & Walker, D. A. (2014). Introduction to research in education (9th ed.). Wadsworth.

Boone, H. N., Jr., & Boone, D. A. (2012). Analyzing Likert data. Journal of Extension, 50(2), Article 48. https://tigerprints.clemson.edu/joe/vol50/iss2/48

Borich, G. D. (1980). A needs assessment model for conducting follow-up studies. The Journal of Teacher Education, 31(3), 39–42. https://doi.org/10.1177/002248718003100310

Boyle, P. G. (1981). Planning better programs. McGraw-Hill.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.

Dodeen, H. M. (2003). Effectiveness of valid mean substitution in treating missing data in attitude assessment. Assessment & Evaluation in Higher Education, 28(5), 505–513. https://doi.org/10.1080/02602930301674

Elhamoly, A. I. M. A., Koledoye, G. F., & Kamel, A. (2014). Assessment of training needs for Egyptian extension specialists (SMSs) in organic farming field: Use of Borich need model. Journal of Agricultural and Food Information, 15(3), 180–190. https://doi.org/10.1080/10496505.2014.921110

English, F., & Kaufman, R. A. (1975). Needs assessment: A focus for curriculum development. Association for Supervision and Curriculum Development. https://files.eric.ed.gov/fulltext/ED107619.pdf

Harder, A., Place, N. T., & Scheer, S. D. (2010).
Towards a competency-based Extension education curriculum: A Delphi study. Journal of Agricultural Education, 51(3), 44–52. https://www.jae-online.org/index.php/back-issues/37-volume-51-number-3-2010/84-towards-a-competency-based-extension-education-curriculum-a-delphi-study

Kuzon, W. M., Urbanchek, M. G., & McCabe, S. J. (1996). The seven deadly sins of statistical analysis. Annals of Plastic Surgery, 37(3), 265–272. https://journals.lww.com/annalsplasticsurgery/Abstract/1996/09000/The_Seven_Deadly_Sins_of_Statistical_Analysis.6.aspx

Lakai, D., Jayaratne, K. S. U., Moore, G. E., & Kistler, M. J. (2012). Barriers and effective educational strategies to develop Extension agents' professional competencies. Journal of Extension, 50(4). https://tigerprints.clemson.edu/joe/vol50/iss4/18/

Lewin, K. (1939). Field theory and experiment in social psychology. American Journal of Sociology, 44(6), 868–896. https://doi.org/10.1086/218177

Lindner, J. R., Dooley, K. E., & Wingenbach, G. J. (2010). A cross-national study of agricultural and extension education competencies. Journal of International Agricultural and Extension Education, 10(1), 51–59. https://doi.org/10.5191/jiaee.2003.10107

Maddy, D. J., Niemann, K., Lindquist, J., & Bateman, K. (2002). Core competencies for the Cooperative Extension System. https://apps.msuextension.org/jobs/forms/Core_Competencies.pdf

Narine, L. K., & Ali, A. D. (2020). Assessing priority competencies for evaluation capacity building in Extension. Journal of Human Sciences and Extension, 8(3), 58–73. https://www.jhseonline.com/article/view/919/856

Norman, G. (2010). Likert scales, levels of measurement and the "laws" of statistics. Advances in Health Sciences Education, 15, 625–632. https://doi.org/10.1007/s10459-010-9222-y

Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1–15. https://doi.org/10.1016/S0001-6918(99)00050-5

Scheer, S. D., Cochran, G. R., Harder, A., & Place, N. T. (2011). Competency modeling in Extension education: Integrating an academic Extension education model with an Extension human resource management model. Journal of Agricultural Education, 52(3), 64–74. https://doi.org/10.5032/jae.2011.03064

Sullivan, G. M., & Artino, A. R., Jr. (2013). Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education, 5(4), 541–542. https://doi.org/10.4300/JGME-5-4-18

Suvedi, M., & Kaplowitz, M. (2016). What every extension worker should know: Core competency handbook. US Agency for International Development. https://agrilinks.org/sites/default/files/resource/files/MEAS%20(2016)%20Extension%20Handbook%20%20Suvedi%20Kaplowitz%20-%202016_02_15.pdf

Umar, S., Man, N., Nawi, N. M., Latif, I. A., & Samah, B. A. (2017). Core competency requirements among extension workers in peninsular Malaysia: Use of Borich's needs assessment model. Evaluation and Program Planning, 62, 9–14. https://doi.org/10.1016/j.evalprogplan.2017.02.001

Waters, R. G., & Haskell, L. J. (1988). Identifying staff development needs of Cooperative Extension faculty using a modified Borich needs assessment model. Journal of Agricultural Education, 30(2), 26–32. https://doi.org/10.5032/jae.1989.02026

Witkin, B. R., & Altschuld, J. W. (1995). Planning and conducting needs assessments: A practical guide. SAGE.
© 2021 by authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).