New Zealand Journal of Forestry Science (2019) 49:2
https://doi.org/10.33494/nzjfs492019x26x
E-ISSN: 1179-5395, published on-line: 04/03/2019

Detection of fallen logs from high-resolution UAV images

D. Panagiotidis*, Azadeh Abdollahnejad, Peter Surový, Karel Kuželka
Department of Forest Management, Faculty of Forestry and Wood Sciences, Czech University of Life Sciences (CULS), Kamýcká 129, Prague 165 21, Czech Republic
*corresponding author: panagiotidis@fld.czu.cz
(Received for publication 1 October 2018; accepted in revised form 15 February 2019)

Abstract

Background: High-resolution images from unmanned aerial vehicles (UAVs) can be used to describe the state of forests at regular time periods in a cost-effective manner. The purpose of this study was to assess the performance of a line template matching algorithm, the Hough transformation, for detecting fallen logs from UAV-based high-resolution RGB images. The suggested methodology does not aim to replace any known aerial method for log detection; rather, it is oriented towards the detection of fallen logs in open forest stands with a high percentage of log visibility and straightness.

Methods: This study describes a line template matching algorithm that can be used for the detection of fallen logs in an automated process. The detection technique was based on object-based image analysis, using both pixel-based and shape descriptors. To determine the actual number of fallen logs, and to compare with those predicted by the algorithm, manual visual assessment was used based on six high-resolution orthorectified images. To evaluate whether a line matched, we used a voting scheme. The total number of detected fallen logs was compared with the actual number of fallen logs using several accuracy metrics. To evaluate predictive models, we tested the cross-validation mean error. Finally, to test how close our results were to chance, we used the Cohen's Kappa coefficient.

Results: The detection algorithm found 136 linear objects, of which 92 were detected as fallen logs. Of the 92 detected fallen logs, 86 were correctly predicted by the algorithm, whereas 24 actual fallen logs were falsely classified as non-fallen logs. The calculated amount of observed agreement (po) was equal to 0.78, whereas the expected agreement by chance (pe) was 0.61. Finally, the Kappa statistic was 0.44.

Conclusions: Our methodology had high reliability for detecting fallen logs based on total user's accuracy (94.9%), whereas a Kappa of 0.44 indicated moderate agreement between the observed and predicted values. Also, the cross-validation analysis denoted the efficiency of the proposed method with an average error of 16%.

Keywords: unmanned aerial vehicle; forest; windthrow; computer vision; pattern recognition; Hough transformation algorithm

© The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Introduction

Wind damage is a serious threat to managed forests because it can reduce landscape quality, timber yields, and wildlife habitat. Windthrow is one of the most significant disturbances in Norway spruce (Picea abies (L.)
H.Karst.) forests in Central Europe, mainly because the root system is sensitive to any dislocation of its primary taproot (Puche 2003). Damage can range from small-scale gaps in forests to catastrophic landscape disturbances (Kuuluvainen 1994), and it varies both spatially and temporally. According to the Czech National Forest Inventory *author correction 20/09/2019 Panagiotidis et al. New Zealand Journal of Forestry Science (2019) 49:2 Page 2 (NFI) from 2001 to 2004, Norway spruce comprised 47.7% of forest cover area (ÚHÚL 2007); it is the most widespread tree species within the Czech Republic. Norway spruce has fast growth and is relatively easy to manage, thus many forest owners have favored spruce monocultures, and this has greatly altered the natural forest ecosystems of Central Europe. Over the past century, the rapid change in the ratio of autochthonous to non-autochthonous species composition (Böhm 1981) and the widespread establishment of unstable spruce monocultures has gradually led to the loss of silvicultural flexibility, and many areas have become more susceptible to biotic and abiotic damages (Klimo et al. 2000). In addition, wind damage to forest stands is expected to increase in the future because of the risks from extreme and unpredictable weather conditions related to climate change (Seidl et al. 2017). Damage assessment after windthrow disturbances is an important component of forest management and ecological monitoring. Fallen logs are typically assessed by field surveys (Ståhl et al. 2001), but they present some unique challenges. Field surveys for log detection are usually time-consuming and labor intensive (Bütler & Schlaepfer 2004); for example, it is expensive to assess the spatial distribution of fallen logs over a wide area. Also, because fallen logs are typically on or near the ground, they can be obscured by understory vegetation, moss, or litter, which can adversely affect data integrity (Rondeux & Sanchez 2010). Inaccessible forests and areas with restrictions, such as national parks, also pose additional challenges. One method to reduce costs and provide practical solutions for forest applications is to use remote sensing technologies, such as airborne and satellite imagery (e.g. Bütler & Schlaepfer 2004; Pasher & King 2009). Currently, there are three extraction methods from remote sensing techniques: (a) visual interpretation; (b) individual windthrown tree extraction; and (c) area extraction. However, all three methods require: (i) interpretation consistency; and (ii) trained and experienced staff. Remote visual interpretation is a relatively simple method that has high accuracy. Fransson et al. (2007) proposed a method based on visual interpretation over simulated windthrown forests, at both stand and individual tree levels, using synthetic aperture radar (SAR) images from the Swedish airborne CARABAS- II and LORA systems after a severe storm. Wang et al. (2010) studied the detection of fallen logs by comparing before and after images of a hurricane disaster using standard aerial images (altitude of 4–5 km) and satellite images. In recent years though, many authors have proposed automated approaches for the extraction of windthrown trees. Szantoi et al. (2012) used a Sobel method for edge detection combined with spectral information based on color filtering. 
For detecting the fallen logs, they considered 15 statistical combinations of spectral bands using high-resolution digital photos obtained with a Leica airborne digital sensor (ADS40); they used manned aircraft at two different flying heights (1,500 and 3,000 m). Although they could identify areas of high tree densities, it was difficult to achieve extraction of individual windthrown trees from standard aerial photographs and satellite images due to obstruction and resolution issues. Unmanned aerial vehicles (UAVs) deployed with light detection and ranging (LiDAR) sensors is a modern automated approach to collect data for the construction of 3D forest maps. The ability of the laser to penetrate the canopy allows for the creation of a digital terrain model (DTM), even in the case of dense forests (Maguya et al. 2014), and it can define geometric objects (i.e. linear and circular) located near the ground based on an area extraction approach (Bailly et al. 2008). Unlike other aerial surveys, LiDAR was an effective tool for individual windthrown tree extraction because it is not subject to the issue of canopy obstruction (Tran et al. 2015; Niemi & Vauhkonen 2016). Blanchard et al. (2011) tried to extract individual fallen logs using an object- oriented image analysis approach with airborne laser scanner (ALS) data. Their study showed a completeness of 73%; however, because of the over-division, they were not able to achieve complete delineation of fallen trees (Blanchard et al. 2011). Even though laser scanner-based systems produce high-quality data, they are still quite expensive. Lindberg et al. (2013) and Hu & Yuan (2016) used ALS data to perform binary classifications, based on height characteristics to eliminate the interference of foreign objects under a closed canopy. However, similar to the object height model (OHM), this approach still suffered from interference problems caused by objects with shapes similar to fallen trees. Both techniques were based on the template matching method, with a reported correctness of 32% and 38% at the individual tree level. Mücke et al. (2013) provided an area extraction method using full-waveform ALS data for the extraction of fallen logs based on area-perimeter ratios and OHM; they achieved a log completeness of 75.6% with an accuracy of 89.9%. In another study, Mokroš et al. (2017) tried to combine unmanned aerial system (UAS) imagery with ALS, to determine the size of windthrown areas within forest stands. Their results showed that the windthrown areas were successfully identified using both the UAS and UAS/ALS techniques, and they performed substantially better than Landsat. Although, the results from UAS overestimated the volume provided by the field measurements, the difference between them was only 4.93%. Consumer-grade UAVs equipped with imaging sensors represent a more efficient tool to acquire highly detailed and spatially continuous 3D data, and they provide a low-cost alternative, which is a core challenge of forest attribute data acquisition at the individual tree level (Panagiotidis et al. 2017; Abdollahnejad et al. 2018; Surový et al. 2018). UAVs, in combination with photogrammetric methods, have also been used for pre-harvest and disturbance assessments. Because UAV images are typically of high-resolution, they also represent new potential to recognize fallen logs (Inoue et al. 2014), and detect coniferous seedlings (Feduck et al. 2018) and tree stumps (Puliti et al. 2018). 
However, only one study has exclusively focused on the prediction of fallen logs at the individual tree level based on UAV imagery (Duan et al. 2017). Although they achieved acceptable extraction accuracy, the flying height was 500 m above the ground. In general, increasing the altitude will increase the image overlap, but the spatial resolution (as a function of flying height) will be lower. Nevertheless, whether using laser or camera sensors, the analyses and processing for fallen log detection in forested areas rely on similar approaches, which may be conducted using different methods, such as rasterization of point clouds using a random sample consensus (RANSAC) algorithm, image processing (Tittmann et al. 2011), and object-based image analysis (Blanchard et al. 2011). Alternatively, we propose the Hough transformation algorithm (Hough 1962), which is a robust feature extraction technique used in image analysis and computer vision; it can be applied directly to a binary image to detect multiple objects of interest after "pixel filtering" (Duan et al. 2017). The principle of the algorithm lies in the image transformation and the use of non-maxima suppression to locate and distinguish peaks in Hough images. Such post-processing might require tuning of extra parameters, especially in the case of higher image complexity, when objects of interest are difficult to distinguish either because they are close to each other or because they are obscured by noisy pixels. The Hough transformation has several advantages: (a) it is robust to partial or slightly deformed shapes; (b) it is tolerant of noise; (c) it can find multiple occurrences of a shape during the same processing pass; and (d) it is robust to the presence of additional structures in the image (Hough 1962). Its greatest strength lies in specialised vision, such as manufacturing quality control and analysis of aerial photographs (Antolovic 2008). Unfortunately, the computational load increases rapidly with the number of parameters that define the detected shape; for example, lines have two parameters, circles three, and so on. Consequently, the Hough algorithm can be used to detect linear, circular, and elliptical objects, such as fallen logs, tree rings (Aschoff & Spiecker 2004), and stem cross-sections (Olofsson et al. 2014).

The aim of this study was to examine the feasibility and performance of a line template matching algorithm (Hough transformation) for fallen log detection from UAV-based high-resolution RGB images under open canopy conditions. To evaluate the accuracy of the method, several accuracy metrics were used, including the total number of detected fallen logs compared with the actual number of fallen logs, and we tested the cross-validation mean error.

Methods

The study area (Fig. 1) is located in West Bohemia, and it is largely surrounded by dense forests. The altitude is approximately 600 m a.s.l. and the topography is generally flat. We selected six randomly distributed experimental plots (50 × 50 m) with open canopy cover, as shown in Additional file: Figure S1. The most common tree species were Norway spruce and Scots pine (Pinus sylvestris L.), with scattered individuals of deciduous species, such as Norway maple (Acer platanoides L.) and birch (Betula spp.); Norway spruce was significantly more abundant than the other species. The forested area extends geographically from 50°11′42″N, 13°16′12″E to 50°09′31″N, 13°13′30″E, and we used WGS84 as the coordinate reference system (Fig. 1).
FIGURE 1: Study area in the Czech Republic. Source: http://download.geofabrik.de/europe/czech-republic.html for a more detailed map (ArcMap V.10.1).

Image acquisition

A commercial UAV quadcopter, the DJI Mavic Pro (Dá-Jiāng Innovations Science and Technology Co. Ltd., Shenzhen, China) with a 12 MP camera, was used for the imagery data collection; it was equipped with a compact onboard gimbal, which ensured crisper and cleaner image acquisition. The copter was guided with the help of DJI GO V. 4.1.9 software (Dá-Jiāng Innovations Science and Technology Co. Ltd., Shenzhen, China) along a planned route at a height of 60 m and a speed of 2 m/s. The camera was set to automatic mode with a time lapse of 3 s. The route was uploaded to the driving unit of the copter using an iPad tablet (Apple Inc., California, United States), though for security reasons the take-off and landing of the copter were guided by manual radio control. The flight trajectory followed a double zig-zag pattern, which consists of two zig-zag flights in perpendicular directions. Forward overlap of photos was 80% and side overlap was 70%. The overlaps were calculated using two applications: Pix4DCapture V. 4.2.0 software (Pix4D S.A, Lausanne, Switzerland), which allows automatic picture taking at a predefined distance (in our case 10 metres), and DJI GO with a built-in intervalometer, which takes a photo every 5 s; at the selected travel speed (2 m/s), the photo spacing would be 10 or 15 metres. Given the approximate size of the photo footprint at ground level, this corresponds to 70 to 80% overlap. The quadcopter needed approximately 6 to 7 min to complete a flight based on the predefined parameters (i.e. number of waypoints and flight speed). We performed six flights in total, one flight per plot, using the same flight parameters. The total number of images acquired per flight was approximately 250 to 300. The flights were conducted on November 11, 2017 at 11:00-12:00 local time, when weather conditions were favorable for photographing.

Data processing

Collected imagery data were processed in Photoscan Professional V. 1.4.2 software (Agisoft LCC, St. Petersburg, Russia). To increase the accuracy of the alignment process and to generate high-resolution orthomosaics (2.68 cm/pixel) from the 3D point clouds, accuracy was set to high quality. To geo-reference the images, they were orthorectified based on the GPS coordinates recorded by the UAV (DJI Mavic Pro) using a built-in rectification algorithm in Photoscan. No ground control points (GCPs) were used to improve the final positional accuracy of the models (Tomaštík et al. 2017; Rangel et al. 2018) due to the inability to access the plots.

Manual visual assessment

To determine the actual number of fallen logs used for the reference data, we performed a manual visual assessment to identify individual fallen logs from the digital orthophotos; we marked the beginning and end of each fallen log in each plot using Adobe Photoshop V. 19.1.1 software (Adobe Systems Incorporated, California, United States). The total number of fallen logs was 110 across all six plots.

Pre-processing of raw image data

Usually, fallen logs display elongated geometry and a bright spectrum due to lateral optical scattering, which makes them visibly different from other objects.
The technique used for the interpretation of UAV images to identify fallen logs was based on object-based image analysis using both spectral and shape characteristics (Jones & Purves 2008). Because of the ambiguity problem of the range measurement and the degree of forest structural complexity in some of the plots, preliminary work was done to improve and prepare the images for further processing. The ‘imread’ function was initially used to read the image, then the RGB image was converted to binary using the ‘imbinarize’ function. To better identify line patterns in each plot, an image thresholding was applied based on the reflectance of defined spectral log values; however, we did encounter a few problems in this phase. The edges of the objects were often comprised of noisy pixels due to extreme changes in pixel intensity compared to the neighboring pixels, and also, in some cases, slender logs had partial interference from living trees, such as branches and leaves. To overcome these issues and to produce smoother line patterns, we used a step filtering process that included different types of mathematical morphological operations. We used the ‘lagmatrix’ for smoothing the line patterns and the ‘bwareaopen’ tool was used to remove small objects with an area smaller than a particular number of pixels; we also used ‘bwareafilt’ to retain only those pixels contained within the largest areas, which was based on the number of pixels. To address the problem of slender logs, we used additional functions such as ‘imdilate’ to dilate the lines and ‘imclose’ to perform a close operation based on a line structuring element of the number of desired neighboring pixels. The parameters used for each of the above morphological tools are described in Additional file: Table S1. As a final step, edge detection (Ziou & Tabbone 1998) using the Canny approach (Canny 1986) was applied to determine the boundaries of line objects (Fig. 2). The Canny approach was preferred because it ensures optimal detection performance of linear objects and it dramatically reduces the amount of data to be processed. The Canny edge detection process has the following attributes: (i) application of a Gaussian filter for smoothing the images (noise removal); (ii) finding the intensity gradients in each image; (iii) application of non-maximum suppression to get rid of spurious responses to edge detection; (iv) application of a double threshold to determine potential edges; and (v) it finalises the detection of edges by suppressing all the other weak edges that are not connected to strong edges. The whole pre-processing phase was conducted in MatlabR2017b professional edition (MathWorks©, Inc., Massachusetts, United States). An overview of the workflow can be seen in Fig. 3. Panagiotidis et al. New Zealand Journal of Forestry Science (2019) 49:2 Page 4 FIGURE 2: Illustration of log segmentation using the edge detection technique (Plot 1) Principle of the Hough algorithm Fallen logs have evident linear characteristics, therefore Hough transformation was adopted to extract the fallen logs from the binary images (Hough 1962). For this purpose, we used the following mathematical model (Equation 1), as suggested by Duda & Hart (1972): ρ = s1∗ cos(θ) + s2 ∗ sin(θ) (1) where the parameter ρ is the distance from the origin to the closest point on the straight line and θ is the angle between the x-axis and the line connecting the origin with the closest point (Fig. 4b). The Hough transformation algorithm (Ye et al. 
2015; Mukhopadhyay & Chaudhuri 2015) was used to detect all the line patterns in each image, based on the decision from the edge detector, and it represented them in the form of a 2D array called the parameter space. The parameter space is a graphical representation of an image, as shown in the example of Fig. 4, which illustrates the entire concept of the Hough algorithm. The use of the Hough algorithm for log detection is quite simple and is based on all the points that the edge detector indicated as edges in the binary images. Every point found by the edge detector votes for the possible lines passing through it. For example, through point P1 (Fig. 4a), a defined number of lines (theoretically infinite) will pass, and each line will have a particular distance (ρ) and a particular angle (θ; Fig. 4a). The process continues for another line passing through the same point, which will also have its own (ρ, θ). Consequently, any line that passes through that point will eventually get one vote (the principle of the voting scheme). Following the exact same process for another point, P2 (Fig. 4a, b), we will get one more vote, and so on. However, there will be a line (Fig. 4a, b) on which both points P1 and P2 are located, and this line will get two votes as a common line between the two points. The process of voting continues for the rest of the points until it identifies the remaining lines of interest in the binary images.

FIGURE 3: Flow diagram of the proposed automated fallen log detection process.

FIGURE 4: Example of mapping points P1 and P2 from Cartesian space (a) to the slope-intercept parameter space (c); and mapping P1 and P2 from Cartesian space (b) to the (ρ, θ) parameter space (d). Source: Liu et al. (2017).

Log detection

The rule for log detection is that when there are multiple points on the same line, that line will receive multiple votes. To calculate the total number of these votes, we created the accumulator, which is a new transformed coordinate system array (ρ, θ) (Fig. 4c, d). For every point (e.g. P1, P2), we used its coordinates with different θ values in Equation 1 to define the position of ρ inside the accumulator. Consequently, each point formed its own sinusoidal curve in the parameterised space. As the process continued, sinusoidal curves accumulated; when the process was finished, we looked for the maxima (highest number of votes), which indicated the line we were looking for (a single fallen log). Finally, to find and extract all the fallen logs in the image, the 'houghlines' function was applied using a loop; every time the loop ran, it identified and plotted a single line. Each line returned by 'houghlines' contains the coordinates of the points (P1 (s1, s2), P2 (s1, s2)) in the fields point1 and point2, which represent the beginning and end of each line (i.e. log). The entire process was repeated until the desired number of peaks, set with the 'houghpeaks' function, was reached, and the output of this process was stored in a 2 × 2 matrix. For the entire analysis (pre-processing and processing), we created two separate .mat files using MATLAB R2017b professional edition (MathWorks©, Inc., Massachusetts, United States) and its Econometrics Toolbox.
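To make the detection step concrete, the short MATLAB sketch below illustrates the voting, peak selection, and line extraction described above, assuming the Image Processing Toolbox. The input file name and all numeric values (number of peaks, vote threshold, fill gap, minimum length) are illustrative assumptions, not the settings used in this study, and the morphological clean-up of Table S1 is omitted here.

```matlab
% Minimal sketch of Hough-based fallen-log extraction (Image Processing Toolbox).
% The file name and numeric parameters are illustrative placeholders.
bw = imbinarize(rgb2gray(imread('plot1_orthomosaic.tif')));   % hypothetical input image

edges = edge(bw, 'canny');                   % Canny edge detection
[H, theta, rho] = hough(edges);              % accumulator of (rho, theta) votes
peaks = houghpeaks(H, 40, 'Threshold', 0.3 * max(H(:)));      % strongest accumulator maxima
lines = houghlines(edges, theta, rho, peaks, ...
                   'FillGap', 20, 'MinLength', 100);          % candidate fallen logs

% Each element of 'lines' stores the end points of one detected line in the
% fields point1 and point2, i.e. the beginning and end of a candidate log.
imshow(bw); hold on
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:, 1), xy(:, 2), 'g-', 'LineWidth', 2);
end
hold off
```

In this sketch, the number of peaks requested from 'houghpeaks' plays the role of the loop limit described above: each retained accumulator maximum yields one extracted line.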
Evaluation of accuracy and error assessment

The accuracy of the detected and actual fallen logs was evaluated for each plot using the following accuracy metrics:

Overall accuracy (%) = CD/MVA * 100 (2)

where CD stands for correct detected and MVA represents the manual visual assessment of the number of fallen logs.

User's accuracy (%) = CD/DFL * 100 (3)

where DFL stands for detected as fallen log. Because all six plots had relatively low structural complexity (i.e., open canopy, few piled logs, and a low percentage of ground cover), it was possible to visually investigate the user's accuracy based on the digital orthophotos.

Commission error (%) = 100 − User's accuracy (%) (4)

Due to the small number of sample plots, it was not possible to divide them into training and test groups. Moreover, randomly splitting individual trees into training and test groups cannot guarantee the independence of the training and test samples, because the algorithm can randomly select training and test samples from the same plot. Therefore, we used the leave-one-out cross-validation technique to evaluate the predictive model as follows:

CVn = (1/n) ∑i=1..n ((yi − ŷi)/(1 − hi))² (5)

where CV stands for cross-validation, n is the number of samples, yi is the observed quantitative output, ŷi is the predicted value, and hi is the leverage.

Additionally, to test how far from or close to chance the result was, we used the Cohen's Kappa coefficient as follows:

k = (po − pe)/(1 − pe) (6)

where po is the observed agreement and pe is the expected agreement by chance.

Results

To test the detection performance and demonstrate some of the strengths and weaknesses of the Hough algorithm, we sampled plots with different spatial patterns of fallen log distributions. To visualise the results in each plot, the binary images were used as background. Binary images have the practical advantage of contrast, especially in images of increased complexity (e.g. forest stands) in small-scale figures.

In Plot 1, the algorithm detected 15 of the 22 fallen logs (only 16 of the fallen logs could be displayed in the binary image); the correct number of detected fallen logs was 15. The overall accuracy was 68.2%, user's accuracy was 100%, and the commission error was 0% (Fig. 5a; Table 1). For Plot 2, all 10 of the 10 fallen logs were automatically detected, resulting in an overall accuracy of 100%, user's accuracy of 100%, and a commission error of 0% (Fig. 5b; Table 1). Plot 3 had the greatest number of fallen logs. We recorded a total of 41 fallen logs, of which 30 were automatically detected. However, the correct number of detected fallen logs was 28; the algorithm produced additional lines for two different fallen logs. The estimated overall accuracy for this plot was 68.3%, user's accuracy was 93.3%, and the commission error was 6.7% (Fig. 6; Table 1).

FIGURE 5: White lines represent the fallen logs and green lines represent the detected fallen logs in (a) Plot 1; and (b) Plot 2.

Plot 4 also had 100% accuracy, as the algorithm detected all five of the actual fallen logs, resulting in an overall accuracy of 100%, user's accuracy of 100%, and a commission error of 0% (Table 1; Fig. 7a). In Plot 6, the actual number of fallen logs was 22; however, the algorithm detected 23 fallen logs.
Nevertheless, the correct number of detected fallen logs was 20 because the algorithm produced additional lines for three fallen logs, resulting in an overall accuracy of 90.9%, user's accuracy of 87%, and a commission error of 13% (Fig. 7b; Table 1). Finally, for Plot 5, the total number of actual fallen logs was 10, of which nine were automatically detected. Similar to Plots 3 and 6, the algorithm produced two additional lines from a single fallen log. As a result, the correct number of detected fallen logs was eight, resulting in an overall accuracy of 80%, user's accuracy of 88.9%, and a commission error of 11.1% (Fig. 8; Table 1).

The cross-validation technique indicated that the applied methodology could detect fallen logs with a mean error of 16%. During the process, the algorithm found 136 linear objects, of which 92 were detected as fallen logs. Of the 92 detected fallen logs, 86 were correctly predicted by the algorithm, whereas 24 actual fallen logs were falsely classified as non-fallen logs (Table 2). The calculated amount of observed agreement (po) was equal to 0.78, whereas the expected agreement by chance (pe) was 0.61. Finally, the Kappa statistic was 0.44.

FIGURE 6: White lines represent the fallen logs and green lines represent the detected fallen logs for Plot 3.

FIGURE 7: White lines represent the fallen logs and green lines represent the detected fallen logs in (a) Plot 4; and (b) Plot 6.

TABLE 1: Statistical summary table for all six research plots including their total accuracies.

Plot number   MVA(a)   Detected as fallen log(a)   Correct detected(a)   Overall accuracy (%)   User's accuracy (%)   Commission error (%)
1             22       15                          15                    68.2                   100                   0
2             10       10                          10                    100                    100                   0
3             41       30                          28                    68.3                   93.3                  6.7
4             5        5                           5                     100                    100                   0
5             10       9                           8                     80                     88.9                  11.1
6             22       23                          20                    90.9                   87                    13
Total         110      92                          86                    84.6                   94.9                  5.1

(a) Manual Visual Assessment (MVA), Detected as Fallen Log, and Correct Detected represent the number of individual fallen logs.

Discussion

The different compositions of Norway spruce and other species present in this study allowed us to test, using a low-cost UAV, the efficiency of the Hough algorithm for the detection of fallen logs from high-resolution orthorectified images of forest sites dominated by Norway spruce in the Czech Republic. The proposed algorithm exhibited good detection rates based on both spectral and shape descriptors; we were able to support the statistical results with visual image interpretation from the digital orthophotos (Figs. 5-8). Although we were unable to use GCPs in the research plots, which might have improved the positional accuracy of the final models (orthomosaics), the geo-referencing process is generally not fully correlated with the accuracy of local coordinates (Elatawneh et al. 2014). It might help improve the accuracy, but from a practical perspective, installing GCPs would be impractical for forest practitioners, especially in windthrown areas that are usually inaccessible and where quickly evaluating the damage is crucial. Because fallen logs are generally defined as being > 2.5 cm in diameter (Harmon et al. 1986), we assumed that the high spatial resolution of the aerial photos would allow us to detect even the smaller fallen logs. Satisfactory detection was observed in the case of partially occluded fallen logs (Plots 2, 6; Fig.
5b, 7b), where the use of shape filtering functions made it possible to unify particular log sections. However, detection was difficult in cases where: (a) the fallen logs were too close to each other, i.e. either the perpendicular distance between two or more parallel fallen logs was too small or the lines were about to intersect at some point, thus forming acute angles, which produced duplicated lines (Plots 3, 6, 5; Fig. 6, 7b, 8); (b) the fallen logs were not clearly distinguishable, either as a result of occlusion (i.e., other logs, branches, leaves, etc.) or because of the shadowing effect from neighboring standing trees, which gave darker tones to some of the fallen logs, as in the case of Plot 1 (Fig. 5a), where six actual fallen logs did not appear. In other cases (Plot 5; Fig. 8), although we tried to give weight to the log values, we were unable to optimise the detection without adverse effects on the detection of other linear objects. In fact, it was difficult to define an appropriate threshold value that would ensure that the occluded parts could be "filled" to form a uniform log without any intervention (i.e., merging) with neighboring pixels belonging to another line of interest. (c) The fallen logs were slender; in general, a high percentage of log completeness is mainly associated with larger-diameter trees (Blanchard et al. 2011). In our case, some of the fallen logs appeared only indistinctly in the binary image due to their size (usually slender), which resulted in insufficient intensity of pixel values, as in the case of Plot 3 (Fig. 6), and we assumed that these fallen logs were smaller than 30 cm in diameter at breast height (DBH). Because of the difficulty in distinguishing them as complete linear objects, some of them remained undetectable by the algorithm. This is consistent with the findings of Nyström et al. (2014), who reported only 43% completeness for fallen logs with DBH < 30 cm, and Inoue et al. (2014), who were able to identify approximately 80% to 90% of fallen logs that were > 30 cm in DBH but failed to distinguish many of the fallen logs that were narrower or shorter. Another limitation was related to log curvature; whenever there was a visible abrupt change in curvature along the log, as in Plots 2 and 4 (Figs. 5b and 7a, respectively), the algorithm was unable to continue its process and detect the rest of the fallen log.

FIGURE 8: White lines represent the fallen logs and green lines represent the detected fallen logs for Plot 5.

Due to the high user's accuracy in all research plots (Table 1), we concluded that the proposed methodology produced an output similar to the reference data, and it reliably detected individual fallen logs. In terms of evaluating the accuracy of detecting fallen logs at different densities, the statistical analysis of the accuracy metrics (Table 1), and particularly the results for the commission error, revealed that the amount of error depends more on the positions and formation of the logs (e.g., piled logs, logs that are very close to each other) than on the actual number of fallen logs within each plot, which is consistent with the results of Lindberg et al. (2013). Also, if we consider Fig. 7a, we can clearly see that Plot 4, with five fallen logs, had the same amount of error as Plot 1 (Fig. 5a), which had 22 fallen logs. Similarly, Plot 3 (Fig. 6) had 41 fallen logs and Plot 5 (Fig.
8) had 10 fallen logs. The proposed technique can be a feasible extraction solution because it is capable of producing low commission errors (< 13%) across a relatively broad range of tree heights and sizes, similar to the findings of Duan et al. (2017). The Kappa statistic (0.44) suggested moderate agreement between the predicted and the actual values.

TABLE 2: Confusion matrix (n = 136).

                 Predicted: Yes   Predicted: No   Total
Actual: Yes      86               24              110
Actual: No       6                20              26
Total            92               44              136

Finally, there was no need for any radiometric correction of the images because (a) the UAV flight altitude was low, (b) all sample plots were located on flat areas, and (c) shadowing effects were almost negligible. However, special caution should be taken in similar studies that use optical cameras deployed on UAVs due to the angular variation of reflection from objects.

Conclusion

The main contribution of our study is the development and demonstration of the Hough transformation algorithm for individual windthrown tree detection. Based on our findings, we propose that this detection technique is more feasible in cases where: (a) the percentage of canopy cover is small (open canopy stands), preferably clear-cut areas and silvopastoral systems, where the percentage of log visibility is high; and (b) log form is straight (i.e. pure cylindrical or conical forms). Although there were a few limitations, for example difficulties in obtaining permission to access the plots to collect field measurements (e.g., log diameters), which could have helped us better identify the diameter limits that produced log incompleteness, we showed that the proposed methodology has high reliability for the detection of fallen logs based on total user's accuracy (94.9%), and the Kappa statistic suggested that there was moderate agreement between the observed and predicted values. Moreover, the cross-validation analysis denoted the efficiency of the proposed method, with an average error of 16%. Although the results showed good agreement between the two approaches (logs detected by the algorithm versus the manual approach), further research is needed to refine the accuracy of the proposed method. The most noticeable problem in using the Hough algorithm in our study was the influence of isolated pixels, which in some cases could be solved using particular shape filtering functions.

Additional files

FIGURE S1: Orthophotos of the study site.
TABLE S1: All the parameters used for the detection of linear patterns during image pre-processing.

Ethics approval
Not applicable.

Consent for publication
Not applicable.

Availability of data
Please contact the primary author for further information.

Competing interests
The authors declare that they have no competing interests.

Funding
We acknowledge that this work was financially supported by (a) the project EVA4.0 [Grant No. CZ.02.1.01/0.0/0.0/16_019/0000803] of the Faculty of Forestry and Wood Sciences (FFWS) of the Czech University of Life Sciences (CULS) in Prague; and (b) the Ministry of Agriculture of the Czech Republic [Grant No. QJ1520187].

Authors' contributions
DP, AA, and KK are currently researchers at the Department of Forest Management of the Czech University of Life Sciences (CULS) in Prague, Czech Republic, and graduates of the Forest Sciences Ph.D. program, Department of Forest Management, Faculty of Forestry and Wood Sciences, CULS.
PS is an Associate Professor in Forest Management with specification in Remote Sensing and Head of the Department of Forest Management of the Faculty of Forestry and Wood Sciences in CULS Prague, Czech Republic. Acknowledgements We gratefully acknowledge project EVA4.0 [Grant No. CZ.02.1.01/0.0/0.0/16_019/0000803] of the Faculty of Forestry and Wood Sciences (FFWS) from the Czech University of Life Sciences (CULS) in Prague and the Ministry of Agriculture of Czech Republic [Grant No. QJ1520187] for financial support to this study. References Abdollahnejad, A., Panagiotidis, D., Surový, P., & Ulbrichová, I. (2018). UAV capability to detect and interpret solar radiation as a potential replacement method to hemispherical photography. Remote Sensing, 10(3), 423. Antolovic, D. (2008). Review of the Hough transform method, with an implementation of the fast Hough variant for line detection. Indiana University. Bloomington, USA: Department of Computer Science. Aschoff, T., & Spiecker, H. (2004). Algorithms for the automatic detection of trees in laser scanner data. In: Proceedings of the ISPRS working group VIII/2 “Laser-Scanners for forest and Landscape assessment”, Freiburg, Germany, (pp. 71-75). Bailly, J.S., Lagacherie, P., Millier, C., Puech, C., & Kosuth, P. (2008). Agrarian landscapes linear features detection from LiDAR: Application to artificial drainage networks. International Journal of Remote Sensing, 29, 3489–3508. Blanchard, S.D., Jakubowski, M.K., & Kelly, M. (2011). Object-based image analysis of downed logs in disturbed forested landscapes using LiDAR. Remote Sensing, 3(11), 2420–2439. Böhm, P. (1981). Sturmschäden in Schwaben von 1950 bis 1980. Allgemeine Forst und Jagdzeitung, 36, 1380. Bütler, R., & Schlaepfer, R. (2004). Spruce snag quantification by coupling colour infrared aerial photos and a GIS. Forest Ecology and Management, 195, 325–339. Panagiotidis et al. New Zealand Journal of Forestry Science (2019) 49:2 Page 9 Canny, J.F. (1986). A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679-698. Duan, F., Wan, Y., & Deng, L. (2017). A novel approach for coarse-to-fine windthrown tree extraction based on unmanned aerial vehicle images. Remote Sensing, 9(4), 306. Duda, R.O., & Hart, P.E. (1972). Use of the Hough Transformation to detect lines and curves in pictures. Communications of the ACM, 15(1), 11-15. Elatawneh, A., Wallner, A., Manakos, I., Schneider, T., & Knoke, T. (2014). Forest cover database updates using multi-seasonal RapidEye data—storm event assessment in the Bavarian Forest National Park. Forests, 5, 1284-1303. Feduck, C., McDermid, G.J., & Castilla, G. (2018). Detection of coniferous seedlings in UAV imagery. Forests 9(7): 432. Fransson, J. E. S., Magnusson, M., Folkesson, K., & Hallberg, B. (2007). Mapping of wind-thrown forests using VHF/UHF SAR images. In: Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, (pp. 2350–2353). Harmon, M.E., Franklin, J.F., Swanson, F.J., Sollins, P., & Gregory, S.V. (1986). Ecology of coarse woody debris in temperate ecosystems. Advances in Ecological Research, 15, 133–302. Hough, P.V.C. (1962). Method and means for recognizing complex patterns. U.S. Patent 3,069,654. Hu, X., & Yuan, Y. (2016). Deep-Learning-Based Classification for DTM Extraction from ALS Point Cloud. Remote Sensing, 8(9): 730. Inoue, T., Nagai, S., Yamashita, S., Fadaei, H., Ishii, R., Okabe, K., Taki, H., Honda, Y., Kajiwara, K., & Suzuki, R. 
(2014). Unmanned aerial survey of fallen trees in a deciduous broadleaved forest in eastern Japan. PLoS ONE, 9(10), e109881. Jones, C. B., & Purves, R. S. (2008). Geographical information retrieval. International Journal of Geographical Information Science, 22(3), 219–228. Klimo, E., Hager, H., & Kulhavý, J. (Eds.) (2000). Spruce monocultures in Central Europe–problems and prospects. European Forest Institute Proceedings, 33, 1–208. Kuuluvainen, T. (1994). Gap disturbance, ground microtopography, and the regeneration dynamics of boreal coniferous forests in Finland: A review. Annales Zoologici Fennici, 31(1), 35–51. Lindberg, E., Hollaus, M., Mücke, W., Fransson, J.E.S., & Pfeifer, N. (2013). Detection of lying tree stems from airborne laser scanning data using a line template matching algorithm. ISPRS Annals of the Photogrammetry Remote Sensing and Spatial Information Sciences, II-5/W2, 169–174. Liu, W., Zhang, Z., Li, S., & Tao, D. (2017). "Road detection by using a generalized Hough transform", Remote Sensing, 9(6): 590. Maguya, A S., Junttila, V., & Kauranne, T. (2014). Algorithm for extracting digital terrain models under forest canopy from airborne LiDAR data. Remote Sensing, 6(7), 6524–6548. Mokroš, M., Výbošťok, J., Merganič, J., Hollaus, M., Barton, I., Koreň, M., Tomaštík, J., & Čerňava, J. (2017). Early stage forest windthrow estimation based on Unmanned Aircraft System imagery. Forests, 8(9): 306. Mücke, W., Deak, B., Schroi, A., Hollaus, M., & Pfeifer, N. (2013). Detection of fallen trees in forested areas using small footprint airborne laser scanning data. Canadian Journal of Remote Sensing, 39, S32–S40. Mukhopadhyay, P., & Chaudhuri, B.B. (2015). A survey of Hough Transform. Pattern Recognition, 48(3), 993–1010. Niemi, M.T., & Vauhkonen, J. (2016). Extracting canopy surface texture from airborne laser scanning data for the supervised and unsupervised prediction of area-based forest characteristics. Remote Sensing, 8(7), 582. Nyström, M., Holmgren, J., Fransson, J.E., & Olsson, H. (2014). Detection of windthrown trees using airborne laser scanning. International Journal of Applied Earth Observation, 30, 21–29. Olofsson, K., Holmgren, J., & Olsson, H. (2014). Tree stem and height measurements using Terrestrial Laser Scanning and the RANSAC algorithm. Remote Sensing, 6(5), 4323–4344. Panagiotidis, D., Abdollahnejad, A., Surový, P., & Chiteculo, V. (2017). Determining tree height and crown diameter from high-resolution UAV imagery. International Journal of Remote Sensing, 38(8-10), 2392–2410. Pasher, J., & King, D.J. (2009). Mapping dead wood distribution in a temperate hardwood forest using high resolution airborne imagery. Forest Ecology and Management, 258, 1536–1548. Puche, J. (2003). Growth and development of the root system of Norway spruce (Picea abies) in forest stands – a review. Forest Ecology and Management, 175, 253–273. Puliti, S., Talbot, B., & Astrup, R. (2018). Tree-stump detection, segmentation, classification, and measurement using Unmanned Aerial Vehicle (UAV) imagery. Forests, 9(3): 102. Rangel, J.M.G., Gonçalves, G.R., & Pérez, J.A. (2018). The impact of number and spatial distribution of GCPs on the positional accuracy of geospatial products derived from low-cost UASs. International Journal of Remote Sensing, 39, 7154-7171. Rondeux, J., & Sanchez, C. (2010). Review of indicators and field methods for monitoring biodiversity within national forest inventories. Core variable: Deadwood. Environmental Monitoring and Assessment, 164, 617–630. 
Seidl, R., Thom, D., Kautz, M., Martin-Benito, D., Peltoniemi, M., Vacchiano, G., Wild, J., Ascoli, D., Petr, M., & Honkaniemi, J. (2017). Forest disturbances under climate change. Nature Climate Change, 7, 395–402. Ståhl, G., Ringvall, A., & Fridman, J. (2001). Assessment of coarse woody debris: A methodological overview. Ecological Bulletins, 49, 57–70. Surový, P., Ribeiro, N. A., & Panagiotidis, D. (2018). Panagiotidis et al. New Zealand Journal of Forestry Science (2019) 49:2 Page 10 Estimation of positions and heights from UAV-sensed imagery in tree plantations in agrosilvopastoral systems. International Journal of Remote Sensing, 39(14), 4786–4800. Szantoi, Z., Malone, S., Escobedo, F., Misas, O., Smith, S., & Dewitt, B. (2012). A tool for rapid post-hurricane urban tree debris estimates using high resolution aerial imagery. International Journal of Applied Earth Observation and Geoinformation, 18, 548– 556. Tittmann, P., Shafii, S., Hartsough, B., & Hamann, B. (2011). Tree detection and delineation from LiDAR point clouds using RANSAC. In: Proceedings of SilviLaser 2011 – 11th International Conference on LiDAR Applications for Assessing Forest Ecosystems, Hobart, Australia, (pp. 583- 595). Tomaštík, J.; Mokroš, M.; Saloň, Š.; Chudý, F.; Tunák, D. (2017). Accuracy of photogrammetric UAV-based point clouds under conditions of partially-open forest canopy. Forests, 8(5): 151. Tran, T.H.G., Hollaus, M., Nguyen, B.D., & Pfeifer, N. (2015). Assessment of wooded area reduction by airborne laser scanning. Forests, 6(5), 1613–1627. ÚHÚL (2007). National Forest Inventory in the Czech Republic 2001–2004: Introduction, Methods, Results. Brandýs nad Labem, Czech Republic 224. Wang, W., Qu, J.J., Hao, X., Liu, Y., & Stanturf, J.A. (2010). Post-hurricane forest damage assessment using satellite remote sensing. Agriculture and Forest Meteorology, 150, 122–132. Ye, H., Shang, G., Wang, L., Zheng, M. (2015). A new method based on Hough transform for quick line and circle detection. In: Proceedings of the International Conference on Biomedical Engineering and Informatics, Shenyang, China, (pp. 52–56). Ziou, D., & Tabbone, S. (1998). Edge detection techniques—An overview. International Journal of Pattern Recognition and Image Analysis, 8, 537– 559. List of abbreviations UAVs: Unmanned aerial vehicles; NFI: Czech national forest inventory; SAR: Synthetic aperture radar; LiDAR: Light detection and ranging; DTM: Digital terrain model; ALS: Airborne laser scanner; OHM: Object height model; RANSAC: Random sample consensus; SfM: Structure from motion; CD: Correct detected; MVA: Manual visual assessment; DFL: Detected as fallen log; CV: Cross- validation; DBH: Diameter at breast height. Panagiotidis et al. New Zealand Journal of Forestry Science (2019) 49:2 Page 11 New Zealand Journal of Forestry Science (2019) 49:2 https://doi.org/10.33494/nzjfs492019x26x Additional File 1 Detection of Fallen Logs from High-Resolution UAV Images D. Panagiotidis*, Azadeh Abdollahnejad, Peter Surový, Karel Kuželka Figure S1: Orthophotos of the study site. New Zealand Journal of Forestry Science (2019) 49:2 https://doi.org/10.33494/nzjfs492019x26x 1 Additional File 2 Detection of Fallen Logs from High-Resolution UAV Images D. Panagiotidis*, Azadeh Abdollahnejad, Peter Surový, Karel Kuželka Table S1: All the parameters used for the detection of linear patterns during image pre-processing. 
Morphological operator parameters used for each plot:

Plot     Lagmatrix (n)(a)   Bwareaopen (n)(b)   Bwareafilt (n)(c)   Strel - R, Imdilate(d)   Strel - R, Imclose(d)
Plot 1   3                  120                 30                  -                        -
Plot 2   3                  120                 30                  Disk - 12                Disk - 4
Plot 3   3                  130                 55                  -                        -
Plot 4   3                  120                 10                  -                        -
Plot 5   7                  120                 28                  Disk - 10                Disk - 3
Plot 6   3                  140                 50                  Disk - 1                 -

(a) Value n stands for the number of lags.
(b) Remove objects containing fewer than n pixels.
(c) Retain only the n objects with the largest areas.
(d) Strel is an essential part of the morphological dilation and erosion operations; it refers to the creation of a disk-shaped structuring element, where R specifies the distance from the structuring element origin to the points of the disk.
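As a rough illustration of how one row of Table S1 maps onto the MATLAB morphological operators named in the pre-processing section, the sketch below applies the Plot 2 settings. The variable bw (a binarised plot image) is an assumption, the 'lagmatrix' smoothing step is omitted, and the call order is an interpretation of the table rather than the authors' exact script.

```matlab
% Illustrative application of the Plot 2 parameters from Table S1
% (Image Processing Toolbox). 'bw' is assumed to be a binarised plot image
% from the thresholding step; the call order is an interpretation of the table.
bw = bwareaopen(bw, 120);               % Bwareaopen, n = 120: remove objects smaller than 120 pixels
bw = bwareafilt(bw, 30);                % Bwareafilt, n = 30: retain the 30 largest objects
bw = imdilate(bw, strel('disk', 12));   % Strel disk, R = 12, applied with imdilate
bw = imclose(bw, strel('disk', 4));     % Strel disk, R = 4, applied with imclose
```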