Article Information

Authors: Clinton Carbutt¹, Peter S. Goodman²

Affiliations: ¹Biodiversity Research and Assessment, Scientific Services, Ezemvelo KZN Wildlife, South Africa; ²Private consultant, Conservation Solutions, South Africa

Correspondence to: Clinton Carbutt
Postal address: PO Box 13053, Cascades 3202, South Africa

Dates: Received: 03 Sept. 2012; Accepted: 23 July 2013; Published: 15 Oct. 2013

How to cite this article: Carbutt, C. & Goodman, P.S., 2013, ‘How objective are protected area management effectiveness assessments? A case study from the iSimangaliso Wetland Park’, Koedoe 55(1), Art. #1110, 8 pages. http://dx.doi.org/10.4102/koedoe.v55i1.1110

Copyright Notice: © 2013. The Authors. Licensee: AOSIS OpenJournals. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How objective are protected area management effectiveness assessments? A case study from the iSimangaliso Wetland Park

Abstract

The assessment of protected area management effectiveness was developed out of a genuine desire to improve the way protected areas are managed and reported on, in relation to a formalised set of conservation objectives. For monitoring and reporting purposes, a number of participatory methods of rapidly assessing management effectiveness were developed. Most rapid assessment methods rely on scoring a range of protected area-related activities against an objective set of criteria documented in a formal questionnaire. This study evaluated the results of two applications of the same management effectiveness assessment tool applied to the same protected area, namely the iSimangaliso Wetland Park, South Africa. The manner in which the assessments were undertaken differed considerably and, not unexpectedly, so did the results, with the national assessment scoring significantly higher than the provincial assessment.
Therefore, a further aim was to evaluate the operating conditions applied to each assessment, with a view to determining which assessment was more closely aligned with best practice and hence which score was more credible. The application of the tool differed mainly with respect to the level of spatial detail entered into for the evaluation, the depth and breadth of the management hierarchy that was consulted, the time devoted to the assessment and the degree of peer review applied. Disparate scores such as those obtained in the assessments documented here are likely to bring the discipline of management effectiveness assessment into disrepute unless an acceptable and standardised set of operating procedures is developed and adopted. Recommendations for such a set of ‘indispensable constants’ are made in this article to ensure that management effectiveness assessments remain robust and reputable, thereby ensuring an honest picture of what is happening on the ground.

Conservation implications: We propose that standard operating procedures should be in place when protected area management effectiveness assessments are undertaken, in order for the results to be credible. This involves ensuring that the right people participate and that participants are allowed sufficient time to peer review one another.

Introduction

Protected areas are amongst the most efficient and cost-effective ways of conserving biodiversity (Balmford, Leader-Williams & Green 1995) and therefore form the cornerstone of most conservation strategies (Bertzky et al. 2012; Hockings 2003). The global protected area estate has increased significantly over the past decade, covering some 12.7% of the world’s terrestrial and inland water areas and 1.6% of the global ocean area (Bertzky et al. 2012). This amounts to a global protected area estate of 23 million km², making protected areas the world’s largest form of planned land use (Bertzky et al. 2012; Convention on Biological Diversity [CBD] 2010; Dudley et al. 2010; Ervin 2007; Leverington, Hockings & Costa 2008). Despite this significant area under protection, the current system of protected areas does not adequately conserve a representative sample of the world’s biodiversity at any level in the biodiversity hierarchy (i.e. landscapes, ecosystems, communities, species and genetics) (CBD 2010; McNeely, Harrison & Dingwall 1994). Furthermore, protected areas are facing increasing pressures and threats such as habitat loss, fragmentation, isolation, illegal exploitation, invasive species, lack of capacity, inappropriate policies and incentives, the inequitable distribution of costs and benefits, breaches in security and global climate change (International Union for Conservation of Nature [IUCN] 2004; Leverington et al. 2008; Leverington et al. 2010). This suite of external and internal pressures is affecting the conservation community’s ability to manage its conservation estate effectively, thereby undermining its overall contribution to biodiversity conservation (CBD 2010).

Management effectiveness is defined by the IUCN World Commission on Protected Areas (WCPA) as the assessment of how well a protected area is being managed – primarily the extent to which it is protecting the values and achieving the goals and objectives of the protected area (Hockings, Stolton & Dudley 2000; Hockings et al. 2006).
Such assessments have generally considered four areas: protected area design (both individual sites and systems) (Leverington et al. 2008; Leverington et al. 2010), appropriateness of management systems and processes (Leverington et al. 2008; Leverington et al. 2010), delivery of objectives (Ervin 2003a; Hockings 2003; Leverington et al. 2008; Leverington et al. 2010) and ecological integrity (Ervin 2003b; Parrish, Braun & Unnasch 2003).

If applied broadly across an entire organisation, protected area management effectiveness assessments can enable policymakers to refine their conservation strategies, re-allocate budget expenditures and develop strategic, system-wide responses to the most pervasive threats and management weaknesses (Ervin 2003b). Protected area management effectiveness assessments are therefore not performance assessments of an individual; rather, they reflect how well a conservation authority is managing its entire conservation estate at the protected area level (Carbutt & Goodman 2010).

Protected area management effectiveness is now a well-developed branch of protected area monitoring and evaluation; a number of tools have been developed since the mid-1990s to help assess the effectiveness of protected area management (Hockings 2003). However, a diverse range of circumstances and needs requires different methods of assessment (World Wildlife Fund [WWF] & World Bank 2007). Therefore, the IUCN’s WCPA, under the guidance of a management effectiveness task force, has developed a framework for the development and standard-setting of a range of management effectiveness methodologies within a consistent overall approach (Hockings et al. 2000; Hockings 2003; Hockings et al. 2006). Currently, some 54 assessment tools have been developed in line with this framework (Leverington et al. 2010); the choice of the appropriate tool depends on the information and management needs of the individual or organisation applying the tool. In this respect, Ezemvelo KZN Wildlife has applied protected area management effectiveness assessment methodologies for organisational performance assessment since 2002 (Carbutt & Goodman 2010; Goodman 2002, 2003a, 2003b).

Management effectiveness assessments will always be criticised for being ‘soft science’ as they are rapid and qualitative in nature, essentially devoid of independently collected empirical data. The risk of using qualitative assessments is that the scoring system can be subject to one-sided opinions and perspectives in the absence of peer review, thereby introducing subjectivity and bias. However, such assessments remain the primary means for undertaking an appraisal of management effectiveness to rapidly report on progress over time (WWF & World Bank 2007). Therefore, the onus lies on the assessor to conduct the assessment under strict and consistent operating conditions to ensure that the technique remains robust, objective and reputable. The application of the tool should therefore comply with best practice (see Ervin 2007; Hockings 2003; Hockings et al. 2000; Hockings et al. 2006; Leverington & Hockings 2004; Leverington et al. 2008; Leverington et al. 2010; Williams 2011; WWF & World Bank 2007). The national and provincial assessments of the iSimangaliso Wetland Park (iSWP), listed in 1999 as South Africa’s first World Heritage Site (WHS) (Figure 1), afforded the opportunity to test for parity in results under two different sets of circumstances.
The two primary aims of this study were to (1) compare the results (outcomes) of the two assessments in light of the manner in which each assessment was conducted, given that essentially the same questionnaire was used, and (2) recommend the appropriate standard methodology that should be applied to ensure that the results of these qualitative assessments are as robust and objective as possible, thereby reflecting an accurate and credible representation of management effectiveness on the ground. A secondary aim of this study was to determine whether or not a separate pressures and threats assessment is able to add further value to the monitoring, evaluation and mitigation components of the adaptive management framework.

FIGURE 1: Location of the iSimangaliso Wetland Park in KwaZulu-Natal, South Africa.

Methods

The study area

The non-marine component of the iSWP comprises 11 management units (MUs) accounting for a total area of 230 164 ha (Figure 1). Each MU is linked to an operational budget. The national and provincial assessments did not include the two marine protected areas of the iSWP, namely the Maputaland and St Lucia Marine protected areas, which were assessed by an independent consultant (Tunley 2009) on behalf of Marine and Coastal Management as part of a separate management effectiveness assessment of South Africa’s marine protected areas. The national assessment focused on the iSWP as a whole, whereas the Ezemvelo KZN Wildlife (‘provincial’) assessment was applied at the MU level. In this regard, Lake St Lucia was included as the ‘unofficial’ 11th MU, even though it does not carry a designated budget and staff establishment. Some MUs are dominated by terrestrial habitats, whilst others are dominated by aquatic habitats. Furthermore, the uMkhuze MU included Lower uMkhuze, whilst the ‘St Lucia conservation’ MU referred to the St Lucia ‘municipal’ area to the Cape Vidal gate and included the lower Narrows (Figure 1). Dukuduku and Makasa Nature Reserve did not form part of this assessment. The mean effectiveness score was weighted by the area of each MU.

National and provincial assessments

The national mandate stems essentially from the CBD’s ambitious ‘Programme of work on protected areas’ (Dudley et al. 2005). This programme was developed out of the outcomes of the 5th IUCN World Parks Congress held in September 2003 in Durban, South Africa, which made protected area management effectiveness one of its seven major themes and one of the key messages delivered to the CBD (IUCN 2004). In response to this international obligation, the national Department of Environmental Affairs (DEA) initiated a nationwide project to assess the management effectiveness of South Africa’s WHS and national parks. Since its listing as a WHS, the iSWP has been governed by an over-arching Wetland Park Authority appointed by national government, in addition to the long-established de facto on-the-ground conservation management agency, Ezemvelo KZN Wildlife, acting on behalf of provincial government. As the national assessment only involved the Wetland Park Authority, it was essential to undertake an additional assessment involving the conservation management agency on the ground (‘provincial assessment’).
The national assessment championed by the DEA opted for the second edition of the management effectiveness tracking tool (METT), developed by the WWF and the World Bank in 2007 (see Britton 2010); this assessment tool was based on the IUCN-WCPA evaluation framework (Hockings et al. 2000; Hockings et al. 2006). The METT is a rapid, site-level, qualitative assessment tool based on an expert scoring approach (Hockings et al. 2006) that assesses (depending on which version is used) all six elements of protected area management identified in the WCPA framework (see Hockings et al. 2000; Hockings 2003): establishing the context of existing values and threats, followed by planning and the allocation of resources (inputs), which, through management actions (processes), produce products and services (outputs) that result in impacts measurable against set objectives (outcomes). Ezemvelo KZN Wildlife therefore also made use of the METT for its assessments, as did CapeNature in January 2009 (DEA 2009), given that all data would feed into the national assessment programme. Furthermore, the provincial assessment made use of aspects of the WWF rapid assessment and prioritisation of protected area management (RAPPAM) tool (Ervin 2003c), which effectively quantifies the total pressures and threats faced by each protected area under assessment, given that the METT is too limited to allow a detailed evaluation of protected area outcomes or of pressures and threats (WWF & World Bank 2007). The pressures and threats assessment was a separate component of the overall assessment.

National assessment and attendees

The national assessment, undertaken on 16 April 2010, comprised seven participants (officials of the Wetland Park Authority, all of whom were senior managers) and was completed in 2 h. The management staff of Ezemvelo KZN Wildlife, the de facto on-the-ground management agency, did not take part in the assessment. The iSWP was assessed as a whole and not per MU. The first author sat in as an observer. The national assessment comprised only the METT questionnaire.

Provincial task team, participatory assessment and attendees

The application of the protected area management effectiveness (PAME) programme across KwaZulu-Natal (including the iSWP) was guided by the Ezemvelo KZN Wildlife PAME task team. Protected area staff members were sensitised to the programme through a series of workshops prior to the assessments. The Ezemvelo KZN Wildlife version of the METT was endorsed by the national assessment programme of the DEA, with some minor improvements made by Ezemvelo KZN Wildlife being carried into the national METT assessment tool. This involved the addition of the ‘protected area outcomes’ category, which had, up to this point, not been included in the early development of the national METT tool. Furthermore, the assessment involved three exercises: (1) a cover sheet that captured details of the protected area such as size (area), number of staff, annual operational budget, primary management objectives, protected area values et cetera, (2) the METT questionnaire and (3) the pressures and threats analysis. The 16 Ezemvelo KZN Wildlife staff members who actively participated in the assessment on 23 July 2010 included the park manager, ecoadvice manager, park ecologist, conservation managers for each MU, the district conservation officer and law enforcement staff.
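To illustrate how the three provincial exercises fit together, the following is a minimal sketch of a per-MU assessment record. The field names, types and example values are hypothetical illustrations of the kind of data captured, not the actual Ezemvelo KZN Wildlife data sheets or METT item identifiers.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ThreatEntry:
    """One identified pressure or threat, rated on three factors (1-4 each)."""
    name: str
    extent: int
    impact: int
    permanence: int

@dataclass
class AssessmentRecord:
    """Record for one management unit (MU): cover sheet, METT answers and threats."""
    # (1) Cover sheet details
    mu_name: str
    area_ha: float
    permanent_staff: int
    operational_budget_rand: float
    primary_objectives: List[str] = field(default_factory=list)
    # (2) METT questionnaire: item identifier -> score, or None if non-applicable
    mett_scores: Dict[str, Optional[int]] = field(default_factory=dict)
    # (3) Pressures and threats analysis
    pressures: List[ThreatEntry] = field(default_factory=list)
    threats: List[ThreatEntry] = field(default_factory=list)

# Example with illustrative (invented) values only:
record = AssessmentRecord(
    mu_name='uMkhuze', area_ha=38000.0, permanent_staff=40,
    operational_budget_rand=500000.0,
    primary_objectives=['biodiversity conservation'],
    mett_scores={'legal_status': 3, 'regulations': 2, 'tourism_fees': None},
    pressures=[ThreatEntry('alien plants', extent=3, impact=3, permanence=2)],
)
```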
Comparing the national and provincial assessments within a framework of best practice

In line with the aims of this study, two applications of the same assessment tool to the same protected area afforded an opportunity to compare results and the operating conditions under which each assessment was conducted. This involved objectively comparing the operating conditions of each assessment with best practice, with a view to (1) understanding which assessment was better aligned with best practice and therefore more credible and (2) making an explicit, general statement of global application regarding a minimum set of standard operating procedures for assessments of this nature in the future. In the quest for objectivity, our framework for adjudicating ‘best practice’ was drawn from a number of key publications in the field of management effectiveness. Although most often expressed only as ‘recommendations’, these publications provide ‘guidelines’ that inform how each assessment should be undertaken if we are to accept the scores as rigorous and reasonable reflections of protected area management effectiveness. Benchmark examples include: making use of all six components of the IUCN-WCPA management effectiveness evaluation framework (Hockings 2003; Hockings et al. 2006; Leverington & Hockings 2004), adequate consultation and participation (Hockings et al. 2006), applying the assessment at an appropriate level, scale and frequency (Hockings et al. 2006), the need for a credible, non-management assessor (Hockings et al. 2006), spending sufficient time on the assessment to reach a considered judgement (Ervin 2007; WWF & World Bank 2007), appropriate multiple stakeholder involvement (Ervin 2007; Hockings et al. 2006; Williams 2011) and the inclusion of a pressures and threats assessment (Ervin 2003c).

Provincial assessment methodology

Normalising the scores

Not all questions (assessed criteria) were applicable to all MUs. Hence, it was necessary to score these items as ‘non-applicable’ rather than penalise the MU with a score of zero. This meant that the maximum potential score differed across MUs; the raw score for each MU was therefore divided by an adjusted maximum (the maximum potential score minus the value of the ‘non-applicable’ items) and multiplied by 100 to yield a normalised percentage effectiveness score (a brief worked sketch of this calculation is given after the next subsection).

Pressures and threats assessment

Both pressures and threats (as defined in the Online Appendix) were quantified for each of the 11 MUs in the provincial assessment. This entailed a qualitative assessment of up to 22 identified pressures and threats. To quantify each relevant pressure and threat, staff members rated it on three factors (extent, impact and permanence), assigning each factor a value from 1 (lowest) to 4 (highest). The degree of pressure or threat was determined by calculating extent × impact × permanence (Online Appendix); the maximum degree for each identified pressure or threat was therefore 64 (i.e. 4 × 4 × 4), rated as ‘severe’, whilst the minimum degree was 1, rated as ‘mild’ (Ervin 2003c). The total degree of pressure and the total degree of threat for each MU were determined by summing all individual pressure and threat scores respectively. The maximum potential score in this regard was 1408 (i.e. 22 pressures or threats × 64), whilst the minimum score was zero (Ervin 2003c). It is important to note that any pressures and threats assessment is perception-based (R. Uys pers. comm., 05 November 2009); each participant possesses different views and outlooks, so it was important that sufficient oversight and consistent interpretation were provided in this regard.
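The following is a minimal Python sketch of the scoring arithmetic described above: the normalised percentage effectiveness score per MU (with non-applicable items removed from the denominator), the degree of a single pressure or threat (extent × impact × permanence) and the area-weighted mean effectiveness score across MUs. It assumes each METT item is scored on a 0 to 3 scale; the function names, MU names and numeric values are illustrative only and are not part of the METT or RAPPAM tools.

```python
from typing import Optional, Sequence

def normalised_mett_score(scores: Sequence[Optional[int]],
                          max_per_item: int = 3) -> float:
    """Percentage effectiveness for one management unit (MU).

    Items recorded as 'non-applicable' are passed in as None and are
    removed from the denominator rather than counted as zero.
    """
    applicable = [s for s in scores if s is not None]
    adjusted_maximum = max_per_item * len(applicable)
    return 100.0 * sum(applicable) / adjusted_maximum

def degree_of_pressure(extent: int, impact: int, permanence: int) -> int:
    """Degree of one pressure or threat: extent x impact x permanence.

    Each factor is rated 1 (lowest) to 4 (highest), so the degree ranges
    from 1 ('mild') to 64 ('severe'); an absent pressure scores 0 overall.
    """
    return extent * impact * permanence

def area_weighted_mean(effectiveness: Sequence[float],
                       areas_ha: Sequence[float]) -> float:
    """Mean effectiveness across MUs, weighted by MU area (ha)."""
    total_area = sum(areas_ha)
    return sum(e * a for e, a in zip(effectiveness, areas_ha)) / total_area

# Illustrative (hypothetical) values for two MUs; None marks a non-applicable item.
mu_scores = {
    'Eastern Shores': [3, 2, None, 1, 3],
    'Ozabeni':        [1, 2, 2, None, None],
}
percentages = {mu: normalised_mett_score(s) for mu, s in mu_scores.items()}
print(percentages)
print(degree_of_pressure(extent=4, impact=4, permanence=4))   # 64 = 'severe'
print(area_weighted_mean([68.0, 48.0], [12000.0, 40000.0]))   # area-weighted mean
```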
Correlation exercises

Correlation exercises were undertaken in which quantitative data sets such as budget and staff numbers were compared with management effectiveness scores to identify significant relationships, judged on the strength of the correlation coefficient (r).

Results

Provincial assessment

Comparison of normalised scores per management unit

The provincial assessment realised a mean weighted score of 58% effectiveness (Table 1). Most MUs scored between 51% and 60%, followed by the 61% to 70% category (Figure 2). Two MUs scored between 41% and 50% (Figure 2). No MUs scored more than 68% and no MUs scored below 48% (Figure 2). The lowest scoring MU was Manzengwenya (Coastal Forest Reserve), whilst the highest scoring MUs were Eastern Shores and uMkhuze (Figure 2). The effectiveness scores across the MUs were contained within a narrow range (< 20%), with no major outlying scores (Figure 2).

TABLE 1: Summary of scores for the national and provincial assessments of the iSimangaliso Wetland Park.

FIGURE 2: Ranking of the management effectiveness scores per iSimangaliso Wetland Park management unit (provincial assessment) relative to the national assessment of the iSimangaliso Wetland Park.

Pressures and threats

Some 22 pressures and/or threats were identified as activities or events that either are having, or may in future have, a detrimental impact on the ecological integrity of the iSWP. The ‘top six’ pressures (most of which are also the most significant threats) are climate change, alien plants, dam building, bush encroachment, poaching and protected area isolation, which collectively accounted for 51% of the pressures and threats experienced in the iSWP (Figure 3). In most instances, the total threat score surpassed the total pressure score, implying that protected area management anticipates that all pressures will increase in future if not mitigated (Figure 4).

FIGURE 3: The full suite of 22 pressures and threats displayed as a mean value across all 11 management units of the iSimangaliso Wetland Park, ranked from highest to lowest mean pressure (paired with mean threat).

FIGURE 4: The total pressure and threat scores for the 11 management units of the iSimangaliso Wetland Park, ranked from highest to lowest total pressure (paired with total threat).

Correlation exercises: Relationship between management effectiveness, resource inputs and pressures and threats

The 11 MUs of the iSWP accounted for 319 permanent staff (at an average of 32 permanent staff members per MU) and a total operational budget (excluding payroll) of c. R4 million (at an average of c. R400 000 per MU). Few correlations were statistically significant (i.e. when r > 0.602, which corresponds to the critical value of Pearson’s r for a sample of 11 MUs at p < 0.05; Table 2). Management effectiveness was positively correlated with total budget and negatively correlated with total pressure, whilst budget and the number of permanent staff were positively correlated (Table 2). Total pressures and total threats were also positively correlated (Table 2). False Bay has a large staff complement for a relatively small MU, whereas far larger MUs such as Lake Sibaya, St Lucia conservation and Eastern Shores have either a smaller or similar staff establishment when compared with False Bay. False Bay is also well resourced for its size, whereas Ozabeni (in particular) and Western Shores are poorly funded compared to their respective surface areas.

TABLE 2: Correlation matrix showing the degree of correlation between management effectiveness and total budget, staff, protected area size, total pressures, and total threats.
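The correlation exercise described above can be reproduced with standard statistical tooling. Below is a minimal sketch, assuming Pearson’s correlation coefficient and the r > 0.602 significance threshold reported for the 11 MUs; the input arrays are hypothetical placeholders, not the actual iSWP data summarised in Table 2.

```python
import numpy as np
from scipy import stats

# Hypothetical per-MU inputs (11 values each); the real figures sit behind Table 2.
effectiveness  = np.array([48, 52, 55, 56, 57, 58, 60, 62, 63, 66, 68])          # %
budget_rand    = np.array([150, 200, 250, 300, 320, 350, 400, 450, 500, 550, 600]) * 1000
total_pressure = np.array([700, 640, 610, 600, 580, 560, 520, 500, 470, 430, 400])

CRITICAL_R = 0.602  # Pearson's r, n = 11, two-tailed p < 0.05

def correlate(x, y, label):
    # pearsonr returns the correlation coefficient and its two-tailed p-value
    r, p = stats.pearsonr(x, y)
    significant = abs(r) > CRITICAL_R
    print(f"{label}: r = {r:.2f}, p = {p:.3f}, significant = {significant}")

correlate(effectiveness, budget_rand, "effectiveness vs budget")
correlate(effectiveness, total_pressure, "effectiveness vs total pressure")
```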
National assessment

The national assessment realised an effectiveness score of 86% (Table 1; Figure 2). No pressures and threats assessment or correlation exercises were undertaken.

Parity in results between national and provincial assessments?

Comparing the effectiveness scores of the two assessments of the same protected area, assessed with the same tool, reveals a difference of 28 percentage points (Table 1; Figure 2). The scores also differed considerably across all categories of protected area management (Figure 5). It is clearly evident that the two assessments were undertaken under highly contrasting operating conditions, with the provincial assessment far better aligned with best practice than the national assessment (Table 3).

TABLE 3: The operating conditions applied to the national and provincial assessments. Each aspect is adjudicated against published best practice guidelines and recommendations.

FIGURE 5: Ranking of scores per protected area management category for the iSimangaliso Wetland Park based on the provincial assessment (mean score of 11 management units) and the national assessment (single score).

Discussion

Disparity between national and provincial scores

In this study, we have demonstrated that two management effectiveness assessments of the same protected area, using the same assessment tool, achieved two very different results. In detailing the protocols used in each assessment, we noted that the only difference lay in how the assessments were undertaken (‘operating conditions’) (Table 3). Given that management effectiveness assessments are essentially perception-based and qualitative in nature, and therefore subject to interpretive bias and subjectivity, the most objective way to determine whether each assessment was providing a fully defensible and objective picture of what is happening on the ground is to compare each assessment methodology with best practice. After consulting the literature, it is evident that the provincial assessment is better aligned with best practice than the national assessment, which makes the provincial assessment more credible and hence a more realistic picture of management effectiveness. The provincial assessment involved a quorum of key, relevant staff engaging in robust peer review. Given the diversity of habitats and challenges in the iSWP, the assessment of individual MUs (rather than the iSWP as a whole), each linked to an operational budget, added further value towards the aim of achieving a credible score. Further exercises that collected budget and staff data, together with a pressures and threats assessment, presented a comprehensive and defensible picture of the state of the iSWP, in terms of its management effectiveness, its management challenges and its state of biodiversity integrity. The provincial assessment should become the baseline for future assessments of the iSWP.

From the breakdown of operating conditions, it is clear that the national assessment did not fulfil most of the criteria recognised by best practice as being the essential ingredients required for undertaking assessments of this nature (Table 3).
Therefore, the score should be interpreted with caution in light of skewed and incomplete stakeholder representation, the perfunctory assessment style that did not encourage peer review or allow the expression of different opinions and perspectives, the coarse park-wide scale at which it was applied and the brief period of time in which it was conducted. This, in all likelihood, explains the high score and the large disparity with the provincial score, which is problematic. The score of 86% effectiveness for the iSWP derived from the national assessment is the highest score across all protected areas assessed under the national programme (Britton 2010). The score is therefore misleading, particularly given that the national report (Britton 2010) concluded that ‘all WHS are soundly managed’. A further concern is that the high score distorts the mean for WHS as a whole; a WHS such as the uKhahlamba Drakensberg Park, which scored 73% effectiveness and therefore falls below the mean, would probably have been above the mean if the iSWP national score were a more realistic reflection of management effectiveness. Not only does a biased, non-peer-reviewed score ‘cloud’ the national average (both for all protected areas and for WHS separately), it also sends a false message to the CBD and brings the assessment method into disrepute, which is a concern for both the science and practice of protected area management effectiveness.

The need for robust and standardised operating procedures

In order to obtain an objective, robust score, it is clear from a comparison of these two assessments applied under contrasting operating conditions that a basic set of standard operating procedures (SOPs) should be agreed to and used in all assessments. These should be viewed as ‘non-negotiable’ and not merely as ‘guidelines’ or ‘recommendations’, as the latter invoke a sense of being negotiable. With these ‘ground rules’ in place, we believe that the results of any assessment will be more credible and robust. In the subsections below, we propose three essential ingredients to be included as SOPs.

‘The right players’: The careful selection of a broad range of relevant staff

The accuracy of the score is dependent on identifying the right people and making sure they are present on the assessment day. It is essential to communicate the need for such staff involvement in a timely fashion prior to the assessment to ensure that all participate. The assessment tool comprises a broad range of assessment criteria, with no single individual best placed to answer all of the questions with 100% certainty. It is therefore essential to encourage the participation of a range of relevant staff members, each of whom brings a level of expertise to the assessment table. The failure to do so will result in a ‘false assessment’ and hence a ‘false assessment score’.

‘Let them speak’: Fostering the peer review dynamic

The review dynamic involving multiple parties is essential to achieving a fair score. Allowing participants the time and space to debate each question will help eliminate any bias, false perceptions or prejudice inherent in such assessments. We have noticed that field staff members tend to be so closely involved with day-to-day activities that they lose objectivity, or tend to be too negative and score low. Senior managers come with a more strategic viewpoint and, removed from the day-to-day realities, tend to score too high.
Hence the need to encourage a range of viewpoints and opinions and to facilitate dialogue until a consensus score is reached.

‘Do not rush’: Allow for sufficient time and a sufficient level of detail

The facilitator should not feel under any pressure or obligation to rush to the next question; rather, the facilitator should only move on when all participants have had sufficient time to comment and then agree on the score. The assessment should be applied at a spatial scale that is meaningful, with sufficient time allowed to encourage peer review and to accommodate the required level of detail at that scale.

The added value of a pressures and threats assessment

The suite of major pressures and threats identified for the iSWP is comparable with those identified in Carbutt and Goodman (2010) for KwaZulu-Natal Province as a whole, with the notable exception of dam building, identified in this analysis most likely because the iSWP is primarily a hydrologically driven system in which the impacts of dam building have far-reaching consequences. Given the significant negative correlation between management effectiveness and the pressures being experienced by the iSWP, special attention should be directed towards mitigating these numerous high pressures, particularly given their negative impact on biodiversity target achievement. The alien plants pressure and threat should continue to receive attention and measures should be put in place to educate people on the topical issue of climate change.

Conclusion

Management effectiveness assessments should not be seen merely as a ‘paper exercise’ to meet reporting obligations. Rather, they should be undertaken objectively and with sober judgement and diligence to ensure that the effectiveness score achieved represents a realistic picture of management practices and processes, in the absence of hard quantitative data. As protected area management effectiveness assessments are, by their very nature, open to abuse and to one-sided, narrow points of view, it is critical that strict SOPs and conditions be imposed when such assessments are undertaken. This will result in a credible and robust rapid assessment methodology that generates objective and defensible data within acceptable limits and standards. Furthermore, for protected areas such as the iSWP, where complex institutional arrangements involve a de facto conservation management agency and a separate overarching authority, a single workshop with full representation of all actors and stakeholders is the most appropriate way to assess the park and, in doing so, also ensures good cooperative governance. We have noticed that a clear, emphatic and absolute statement on how best to apply the various assessment tools is lacking, because most publications address best practice methodology only in terms of ‘guidelines’ or ‘recommendations’ (see Hockings et al. 2000; Hockings et al. 2006; Leverington et al. 2008; Leverington et al. 2010; Williams 2011; WWF & World Bank 2007).
Such guidelines should be elevated in practice to non-negotiable SOPs that provide rigour and credibility to the practice of management effectiveness, including, but not limited to, (1) full stakeholder representation of all relevant and knowledgeable actors, (2) an environment that caters for the expression of different perceptions and interpretations, moderated by peer review under the direction of an able chairperson who is not involved in protected area management and (3) the application of the tool at a spatial resolution that is biologically meaningful and with sufficient allocation of time to do so. Anything less will bring the application of management effectiveness assessment methodologies into disrepute and open them up to abuse. Finally, an additional pressures and threats assessment will add further value to the monitoring, evaluation and mitigation components of the adaptive management framework.

Acknowledgements

The Ezemvelo KZN Wildlife PAME working group is thanked for guiding the implementation of the programme in KwaZulu-Natal. Heidi Snyman, Ezemvelo KZN Wildlife cartographer, is thanked for producing the map of the iSWP. We also thank the two anonymous reviewers for helpful comments and suggestions that have significantly improved the quality of the publication.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

C.C. (Ezemvelo KZN Wildlife), author of the article and co-ordinator of the management effectiveness assessments, collated and analysed all results. P.S.G. (Conservation Solutions) provided guidance and oversight throughout the assessments and provided the title and abstract.

References

Balmford, A., Leader-Williams, N. & Green, J.B., 1995, ‘Parks or arks: Where to conserve large threatened mammals’, Biodiversity and Conservation 4, 595–607. http://dx.doi.org/10.1007/BF00222516

Bertzky, B., Corrigan, C., Kemsey, J., Kenney, S., Ravilious, C., Besançon, C. et al., 2012, Protected planet report 2012: Tracking progress towards global targets for protected areas, IUCN and UNEP-WCMC, Cambridge.

Britton, P., 2010, ‘A report on the application of the METT-SA version 1 (2008) to terrestrial protected areas managed at national and provincial level in South Africa’, Unpublished report for the Department of Environmental Affairs, Pretoria.

Carbutt, C. & Goodman, P.S., 2010, ‘Assessing the management effectiveness of state-owned, land-based protected areas in KwaZulu-Natal’, Ezemvelo KZN Wildlife unpublished report, Pietermaritzburg.

Convention on Biological Diversity, 2010, Programme of work on protected areas, viewed 13 January 2010, from http://www.cbd.int/protected/pow/

Department of Environmental Affairs, 2009, Workshop report on the ‘Expert input on protected areas study’, Department of Environmental Affairs, Pretoria.

Dudley, N., Mulongoy, K.J., Cohen, S., Stolton, S., Barber, C.V. & Gidda, S.B., 2005, Towards effective protected area systems: An action guide to implement the Convention on Biological Diversity programme of work on protected areas, Technical Series 18, pp. 1–95, CBD, Montreal.

Dudley, N., Parrish, J.D., Redford, K.H. & Stolton, S., 2010, ‘The revised IUCN protected area management categories: The debate and ways forward’, Oryx 44(4), 485–490. http://dx.doi.org/10.1017/S0030605310000566

Ervin, J., 2003a, ‘Protected area assessments in perspective’, BioScience 53(9), 819–822. http://dx.doi.org/10.1641/0006-3568(2003)053[0819:PAAIP]2.0.CO;2
Ervin, J., 2003b, ‘Rapid assessment of protected area management effectiveness in four countries’, BioScience 53(9), 833–841. http://dx.doi.org/10.1641/0006-3568(2003)053%5b0833:RAOPAM%5d2.0.CO;2

Ervin, J., 2003c, WWF: Rapid assessment and prioritization of protected area management (RAPPAM) methodology, WWF Forests for Life Programme, Gland.

Ervin, J., 2007, Assessing protected area management effectiveness: A quick guide for protected area practitioners, pp. 1–17, The Nature Conservancy and WWF, Arlington.

Goodman, P.S., 2002, A rapid assessment of terrestrial protected area management effectiveness in KwaZulu-Natal, South Africa, pp. 1–30, Ezemvelo KZN Wildlife and WWF, Pietermaritzburg.

Goodman, P.S., 2003a, South Africa: Management effectiveness assessment of protected areas in KwaZulu-Natal using WWF’s RAPPAM methodology, pp. 1–29, WWF International, Gland.

Goodman, P.S., 2003b, ‘Assessing management effectiveness and setting priorities in protected areas in KwaZulu-Natal’, BioScience 53(9), 843–850. http://dx.doi.org/10.1641/0006-3568(2003)053%5b0843:AMEASP%5d2.0.CO;2

Hockings, M., 2003, ‘Systems for assessing the effectiveness of management in protected areas’, BioScience 53(9), 823–832. http://dx.doi.org/10.1641/0006-3568(2003)053%5b0823:SFATEO%5d2.0.CO;2

Hockings, M., Stolton, S. & Dudley, N., 2000, Evaluating effectiveness: A framework for assessing the management of protected areas, WCPA Best Practice Protected Area Guidelines Series 6, IUCN, Gland.

Hockings, M., Stolton, S., Leverington, F., Dudley, N. & Courrau, J., 2006, Evaluating effectiveness: A framework for assessing management effectiveness of protected areas, WCPA Best Practice Protected Area Guidelines Series 14, 2nd edn., IUCN, Gland.

International Union for Conservation of Nature, 2004, IUCN 5th World Parks Congress, Durban, South Africa, 08–17 September, viewed 13 January 2010, from a summary within http://cmsdata.iucn.org/downloads/iucn_wpc_2014_discussion_paper_13_5_11.pdf

Leverington, F. & Hockings, M., 2004, ‘Evaluating the effectiveness of protected area management: The challenge of change’, in C.V. Barber, K.R. Miller & M. Boness (eds.), Securing protected areas in the face of global change: Issues and strategies, pp. 169–224, IUCN, Gland.

Leverington, F., Hockings, M. & Costa, K.L., 2008, Management effectiveness evaluation in protected areas – A global study. Report for the project ‘Global study into management effectiveness evaluation of protected areas’, p. 170, The University of Queensland, Gatton.

Leverington, F., Costa, K.L., Courrau, J., Pavese, H., Nolte, C., Marr, M. et al., 2010, Management effectiveness evaluation in protected areas – A global study, 2nd edn., pp. 1–101, The University of Queensland, Brisbane.

McNeely, J.A., Harrison, J. & Dingwall, P., 1994, Protecting nature: Regional reviews of protected areas, IUCN, Gland.

Parrish, J.D., Braun, D.P. & Unnasch, R.S., 2003, ‘Are we conserving what we say we are? Measuring ecological integrity within protected areas’, BioScience 53(9), 851–860. http://dx.doi.org/10.1641/0006-3568(2003)053[0851:AWCWWS]2.0.CO;2

Tunley, K., 2009, State of management of South Africa’s marine protected areas, Report Series – 2009/Marine/001, WWF South Africa, Cape Town.

Williams, B.K., 2011, ‘Adaptive management of natural resources – Framework and issues’, Journal of Environmental Management 92, 1346–1353. http://dx.doi.org/10.1016/j.jenvman.2010.10.041, PMid:21075505

World Wildlife Fund & World Bank, 2007, Management effectiveness tracking tool: Reporting progress at protected area sites, 2nd edn., WWF International, Gland.