HUNGARIAN JOURNAL OF INDUSTRIAL CHEMISTRY VESZPRÉM Vol. 38(2), pp. 187-191 (2010)

AUTONOMOUS MAPPING IN POLLUTED ENVIRONMENTS

I. SZŐKE, A. MAJDIK, D. LUPEA, L. TAMÁS, GH. LAZEA

Department of Automation, Technical University of Cluj-Napoca, C. Daicoviciu 15, Cluj-Napoca, ROMANIA
E-mail: istvan.szoke@aut.utcluj.ro

The need for ecological robots is imposed by the current trend towards protecting the environment and the human factor, and new research in this area has arisen in the past years. Below, an overview of the state of the art in this domain is given; proposals range from robots designed to repair power line damage to robots that photograph animals in the wild without human intervention. The main objective of this paper is to demonstrate the autonomous navigation of a mobile robot in a polluted environment. So far, research on robots used for the detection of gas sources has focused mainly on indoor environments, but in recent years a constant move towards outdoor environments can be observed. Several research efforts with different applications can be found in the area of ecological robots.

Keywords: autonomous mobile robots, mapping, navigation, polluted environments

Introduction

There are many kinds of environmental robots whose various uses involve detecting failures in the proximity of the robot, excluding the human being from potentially dangerous situations. Describing a few prototype examples demonstrates the utility of research in this field.

A first example is a robot able to detect defects on power lines. The device uses rollers to clamp onto and move along a power line. The robot performs image analysis to see whether anything in the structure differs from an earlier picture taken from the exact same spot, and it can remotely spot high-risk trees, which are the top cause of electrical outages. The prototype also makes sure there are no faulty connections that could cause overheating, and listens for electromagnetic "noise" that might indicate other problems with the equipment.

Figure 1: Electric Power Research Institute prototype

Another proposal is traffic-regulating robots powered by solar energy. Qatar's Public Works Authority, with support from the Qatar Science Club, will soon be developing new robots to replace the manual methods used in guiding traffic at various road project sites. These green robots will be powered by solar energy harvested by onboard solar panels and will control traffic at major ongoing road projects in the main cities.

US scientists have developed a solar-powered prototype and tested it in Greenland. It belongs to the next generation of Antarctic explorer robots, capable of driving hundreds of kilometres and carrying out scientific experiments alone in places where access is currently difficult or expensive. There are two basic types of mission scenario; one is to stop the robot on its driving trajectory and take the data needed - things like sampling for bacteria in the snow, measuring the atmosphere, or doing a glaciological survey with ground-penetrating radar [3].

Another environmental robot, the DustCart [1], was developed by a consortium of European companies and universities and aims to be a garbage-collecting and -segregating robot. What is more, it can go door to door playing garbage man and can be summoned by a cell phone. To top it all off, it can also act as a mobile monitor of atmospheric pollution levels.

Figure 2: The DustCart
Research on using chemical sensors in robotic systems started in the early nineties, focusing on the development of two typologies of robots: the first is able to follow odor trails marked on the ground, much as ants follow a pheromone trail [21]; the second is able to track odor trails formed in fluid media in order to find a chemical source [20, 9, 2]. Nowadays this research field, which covers finding places where chemical sources are present in the air, on the ground surface or underground, has spread widely. Other robots for monitoring gas pollution in areas of interest have also been proposed in the literature: robots designed to monitor ammonia pollution near animal farms [11], robots designed to go into old mines and detect dangers of gas intoxication, and so on. For underground search, finding unexploded ordnance, land mines, and sources of leaks from pipes and tanks are some examples.

Our project proposal is related to a mine exploration scenario, in which an autonomous robot is sent to explore the area in order to detect places where the air contains more chemical components than the normal values. The robot builds the map of the environment, which is explored at fixed intervals. When pollution is detected, the robot sends a warning signal to the server over the wireless network. The warning message contains the position of the robot and the quantity of the pollution. Using all this information, a rescue team can be dispatched in time to prevent more tragic events, such as explosions which could harm the mine workers. Fig. 3 shows a mine scene together with the Pioneer P3-AT, the four-wheeled mobile robot used in our research project.

Figure 3: Mine scene with the P3-AT robot
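The paper does not specify the wire format of this warning; the sketch below is a minimal illustration of how such a message could be assembled and sent over the wireless link, assuming a UDP transport, a JSON payload, and hypothetical field names (`pose`, `pollution_ppm`), server address, and alarm threshold.

```python
# Minimal sketch of the pollution warning message (hypothetical format:
# the paper only states that it carries the robot position and the
# measured pollution quantity).
import json
import socket
import time

SERVER = ("192.168.1.10", 9000)  # assumed address of the monitoring server

def send_warning(x, y, theta, ppm):
    """Send a JSON-encoded warning over UDP when pollution is detected."""
    msg = {
        "timestamp": time.time(),
        "pose": {"x": x, "y": y, "theta": theta},  # robot position on the map
        "pollution_ppm": ppm,                      # measured concentration
    }
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(msg).encode("utf-8"), SERVER)

# Example: raise a warning when a reading exceeds an assumed threshold.
THRESHOLD_PPM = 50.0
reading = 63.2
if reading > THRESHOLD_PPM:
    send_warning(x=12.4, y=3.1, theta=1.57, ppm=reading)
```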
In the following sections the theory behind the visual and the laser system is described. The two systems are combined at different stages of the robot's behaviour, such as the mapping and localization stages.

Theoretical background of the visual system

Building feature based maps

Fig. 4 shows the diagram of the visual feature based mapping algorithm. The visual data and the laser scans need to be taken synchronously. After the data acquisition, the robot's displacement (translation and rotation) is computed from the displacement of consecutive laser scans with a standard Iterative Closest Point (ICP) algorithm [23] applied between two consecutive scans. Because the test environment is relatively small, a more complex scan registration method is not needed, unlike in [16], where the problem of mapping a large mine is addressed.

Figure 4: Diagram of the mapping algorithm
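To make the scan registration step concrete, the following sketch implements a common point-to-point ICP variant in plain NumPy (nearest-neighbour correspondences followed by a closed-form SVD alignment). It is an illustrative baseline under those assumptions, not the exact implementation used on the robot.

```python
# Point-to-point ICP sketch for 2D laser scans (NumPy only).
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30, tol=1e-6):
    """Align scan src (N, 2) to scan dst (M, 2); returns accumulated R, t."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences, for clarity.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_rigid_transform(src, dst[nn])
        src = src @ R.T + t              # apply the incremental transform
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        err = d.min(axis=1).mean()
        if abs(prev_err - err) < tol:    # stop once the error settles
            break
        prev_err = err
    return R_tot, t_tot
```

The accumulated rotation gives the robot's heading change and the translation its displacement between the two scan poses, which is exactly what the mapping pipeline needs before the visual features are registered.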
Meanwhile, another task is to detect the image features on both images of the stereo pair synchronously. Our algorithm uses the image feature detector and descriptor called SURF (Speeded Up Robust Features), a method known in the field of computer vision for detecting interest points on images. SURF was first introduced in [7]; the advantage of this algorithm is its computational speed compared with other well-performing feature detectors such as the Scale-Invariant Feature Transform (SIFT) presented in [19], while maintaining approximately the same or even better performance regarding repeatability, distinctiveness and robustness. In [5] it is concluded that SURF is one of the most suitable descriptors for visual simultaneous localization and mapping applications.

The algorithm and its performance are discussed at length in [6] and [7]; here only a short overview is presented. The main reason the algorithm decreases the computational time is that SURF uses integral images [18] as an intermediate representation; in this way the sum of intensities over any upright rectangular region is calculated with only four additions. The SURF algorithm is based on the Fast-Hessian detector, which computes the determinant of the Hessian matrix:

$$\mathbf{H}(p,\sigma) = \begin{bmatrix} L_{xx}(p,\sigma) & L_{xy}(p,\sigma) \\ L_{xy}(p,\sigma) & L_{yy}(p,\sigma) \end{bmatrix} \tag{1}$$

where $L_{xx}(p,\sigma)$ denotes the convolution of the second order Gaussian derivative $\frac{\partial^2}{\partial x^2}g(\sigma)$ with the image at point $p(x,y)$ and scale $\sigma$ in the $x$ direction; $L_{xy}(p,\sigma)$ and $L_{yy}(p,\sigma)$ are calculated similarly. These derivatives are known as the Laplacian of Gaussian. Another reason the SURF algorithm performs fast is that it approximates the Laplacian of Gaussian with a box filter representation; box filters can be evaluated on integral images in constant time, which yields a further performance increase.

The features must be detected on both images of the stereo pair in order to calculate the local 3D feature map using the geometry of the stereo camera system. In every robotic application it is desirable to limit the amount of data so as to decrease the computational time as much as possible. For this reason the prior map should not be overpopulated, but at the same time the distinctiveness of the data is important. Therefore the 3D image features that are already in the global map, or ones similar to them, are filtered out based on the SURF descriptor. In the final step the coordinate system of the local feature point cloud is updated with the global position of the robot computed by the laser scan registration algorithm, and the new distinctive features are recorded in the global 3D feature map.
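The "four additions" property of the integral image is easy to demonstrate. The short NumPy sketch below (written for this overview, not taken from a SURF implementation) builds an integral image and evaluates an arbitrary upright rectangular sum from four table lookups; SURF approximates each second-order Gaussian derivative in Eq. (1) by weighted combinations of such box sums.

```python
# Integral image demo: any upright rectangular sum costs four lookups.
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over rows < y and columns < x (zero-padded)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, top, left, h, w):
    """Sum of the h-by-w box with top-left corner (top, left): four terms."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.random.randint(0, 256, size=(240, 320))
ii = integral_image(img)
# Matches the naive sum, but in O(1) time instead of O(h * w).
assert box_sum(ii, 40, 60, 9, 15) == img[40:49, 60:75].sum()
```

Because the cost of a box sum is independent of its size, the Fast-Hessian responses can be computed at any scale $\sigma$ without rescaling the image.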
Theoretical background of the laser system

Gaussian mixture models are widely used in data mining, pattern recognition, machine learning and statistical analysis. Besides serving to build complex probability distributions, mixture models can also be used to cluster data. The problem of finding clusters in a set of data points can also be solved by a non-probabilistic technique, the K-means algorithm.

K-means clustering

The main usage of the method introduced by [15] is to identify groups, or clusters, of data points in a multidimensional space. The inputs of the method are a data set $\{x_1, \ldots, x_N\}$ consisting of N observations of a random D-dimensional Euclidean variable x, and K, the number of clusters we would like to obtain. Informally, a cluster can be described as a group of data points whose mutual distances are smaller than their distances to points outside the cluster. We denote by $\mu_k$ the centers of the clusters. Finding an optimal solution to the data set clustering problem is not straightforward; a reasonably good solution is one in which the sum of the squared distances from each data point to its closest center is a minimum. For each data point $x_n$ a corresponding set of binary indicator variables $r_{nk} \in \{0,1\}$, $k = 1, \ldots, K$, is introduced, describing which of the K clusters the data point $x_n$ is assigned to. Each point is assigned to exactly one cluster: if $r_{nk} = 1$, then $r_{nj} = 0$ for all $j \neq k$. This is called the 1-of-K coding scheme.

$$J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk}\,\lVert x_n - \mu_k \rVert^2 \tag{2}$$

The objective function (2) represents the sum of the squared distances from each data point to its assigned center $\mu_k$. The solution is optimal when the values found for $r_{nk}$ and $\mu_k$ minimize J. These values can be obtained with an iterative procedure involving two successive steps in which $r_{nk}$ and $\mu_k$ are optimized in turn. First, initial values are chosen for $\mu_k$ and J is minimized with respect to $r_{nk}$ while keeping $\mu_k$ fixed; in the following step J is minimized with respect to $\mu_k$ while $r_{nk}$ is kept fixed. These two stages correspond to the E (expectation) and M (maximization) steps of the EM algorithm and are repeated until convergence [12].

Expectation-Maximization algorithm

The goal of the EM algorithm is to find maximum likelihood solutions for models having latent variables [12]. Its steps are summarized in Table 1.

Table 1: Description of the EM algorithm
1. Choose initial values for $\Theta^{old}$.
2. E step: evaluate $p(Z \mid X, \Theta^{old})$.
3. M step: evaluate $\Theta^{new} = \arg\max_{\Theta} \Phi(\Theta, \Theta^{old})$, where $\Phi(\Theta, \Theta^{old}) = \sum_{Z} p(Z \mid X, \Theta^{old}) \ln p(X, Z \mid \Theta)$.
4. Check the convergence of the parameter values; if convergence has not been achieved, set $\Theta^{old} = \Theta^{new}$ and return to step 2.

Experimental results

In order to achieve autonomous navigation for the P3-AT in a mine scene, we first tested our algorithms in an indoor environment, the office building where the research is carried out. The 2D laser map and the map built from the images taken by the stereo camera combine well. The feature based map is built iteratively with the algorithm presented above by navigating the robot through the test environment. Fig. 5 shows a selection from the 155 stereo image pairs that were taken in the office, corridor and staircase of our laboratory.

Figure 5: Selected images from the data set

The obtained 3D feature map is plotted in Fig. 6; the black spheres represent the 3D feature landmarks obtained in the whole test environment, divided into the three main rooms.

Figure 6: The obtained 3D feature based map of the environment

The red disks denote the positions of the robot where the stereo image pairs were taken. The origin of the map, the first robot location, is the dark red one, and the fading colour of the disks shows the path travelled by the robot.

The point clouds acquired by the 2D laser system can be processed for several purposes in the navigation process. The first objective is to create an a priori static map, which is done using the algorithms described above, K-means and EM. Fig. 7 shows the map of the office environment, which is then divided into equal rectangles; the objective is to cluster all the separate objects and to detect the dynamic ones.

Figure 7: 2D laser map of the office

In Fig. 8 the map is divided into rectangles, and the same algorithms are applied to each rectangle.

Figure 8: Clustered map of the office

In each rectangle the data set is clustered into separate parts by both algorithms in order to detect the dynamic objects. Of the two separations, the clearer one is chosen, as shown in Fig. 9. Each separate data set is enclosed by a geometrical shape, such as an ellipsoid or another contour.

Figure 9: The separated data set by K-means and EM algorithm

After these steps have been applied to each rectangle, the objective is to group together the data points which belong to the same object. Data points that change their original cluster are included in a new geometrical form. These newly created forms are placed in cells, whose size is deduced from the size of the object. Using the Markovian properties, an optimal trajectory can then be found, and in collaboration with the vision system the localization process works more efficiently.
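For illustration, the sketch below runs the two clustering routines discussed above on a 2D point set standing in for the laser returns of one map rectangle, using scikit-learn's KMeans and an EM-fitted GaussianMixture. The data and the cluster count K = 2 are assumed for the example; the paper does not specify them.

```python
# Clustering a 2D laser point set with K-means and an EM-fitted Gaussian
# mixture (scikit-learn); the points stand in for one map rectangle.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the laser returns of two objects in one rectangle.
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.2, size=(100, 2)),
    rng.normal(loc=(1.5, 0.8), scale=0.2, size=(100, 2)),
])

K = 2  # assumed number of clusters in this rectangle

# K-means minimizes the objective J of Eq. (2) by alternating hard
# assignments r_nk (E-like step) and centre updates mu_k (M-like step).
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(points)

# The Gaussian mixture is fitted with EM and yields the soft
# assignments p(Z | X, Theta) of Table 1.
gm = GaussianMixture(n_components=K, random_state=0).fit(points)

print("K-means centres:\n", km.cluster_centers_)
print("GMM means:\n", gm.means_)
print("Hard labels:", km.labels_[:5], "Soft labels:", gm.predict(points[:5]))
```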
Future work

Our future objective is to develop the visual system so that it is capable of building large scale image feature based maps. To accomplish this goal with efficient data management, the bag-of-words representation will be used, and the laser system will be combined with the Markovian properties. Another interest of our research is the possibility of putting the visual mapping system into a probabilistic framework, so that it can perform simultaneous localization and mapping.

REFERENCES

1. G. FERRI, A. MONDINI, A. MANZI, B. MAZZOLAI, C. LASCHI, V. MATTOLI, M. REGGENTE, T. STOYANOV, A. LILIENTHAL, M. LETTERE, P. DARIO: DustCart, a mobile Robot for Urban Environments: Experiments of Pollution Monitoring and Mapping during Autonomous Navigation in Urban Scenarios, ICRA Workshop, 2010.
2. G. FERRI, E. CASELLI, V. MATTOLI, A. MONDINI, B. MAZZOLAI, P. DARIO: SPIRAL: a Novel Biologically-Inspired Algorithm for Gas/Odor Source Localization in an Indoor Environment with no Strong Airflow, Robotics and Autonomous Systems, Issue 4, April 2009.
3. M. K. MUEZZINOGLU, A. VERGARA, R. HUERTA, N. RULKOV, M. I. RABINOVICH, A. SELVERSTON, H. D. ABARBANEL: Acceleration of chemosensory information processing using transient features, Sensors and Actuators B: Chemical, 2009.
4. C. STACHNISS, C. PLAGEMANN, A. LILIENTHAL: Learning gas distribution models using sparse Gaussian process mixtures, Autonomous Robots, 2009.
5. A. GIL et al.: A comparative evaluation of interest point detectors and local descriptors for visual SLAM, Machine Vision and Applications, Springer, 2009.
6. C. EVANS: Notes on the OpenSURF library, University of Bristol, Technical Report CSTR-09-00, 2009.
7. H. BAY, A. ESS, T. TUYTELAARS, L. VAN GOOL: SURF: Speeded Up Robust Features, Computer Vision and Image Understanding, 110, 2008, 346-359.
8. G. FERRI, M. V. JAKUBA, E. CASELLI, D. R. YOERGER, P. DARIO: Localizing multiple gas/odor sources in an indoor environment using Bayesian occupancy grid mapping, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007.
9. C. LAUGIER, R. CHATILA: Autonomous Navigation in Dynamic Environments, Springer Tracts in Advanced Robotics, 35, 2007.
10. A. LILIENTHAL, A. LOUTFI, T. DUCKETT: Airborne chemical sensing with mobile robots, Sensors, 2006.
11. D. MARTINEZ, O. ROCHEL, E. HUGHES: A biomimetic robot for tracking specific odors in turbulent plumes, Autonomous Robots, June 2006.
12. C. M. BISHOP: Pattern Recognition and Machine Learning, Information Science and Statistics, 2006.
13. H. R. OLESEN, R. BERKOWICZ, M. KETZEL: Regulatory Odour Model Development: Survey of Modelling Tools and Datasets with Focus on Building Effects, Technical Report 541, Denmark, 2005.
14. L. MARQUES, A. MARTINS, A. DE ALMEIDA: Environmental monitoring with mobile robots, IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2005.
15. J. A. FARRELL, S. PANG, W. LI: Plume mapping using Markov methods, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 2003.
16. S. THRUN et al.: A system for volumetric robotic mapping of abandoned mines, IEEE International Conference on Robotics and Automation, 2003, 4270-4275.
17. A. T. HAYES, A. MARTINOLI, R. M. GOODMAN: Distributed odor source localization, IEEE Sensors Journal, 2, 2002.
18. P. VIOLA, M. JONES: Rapid object detection using a boosted cascade of simple features, Computer Vision and Pattern Recognition, 2001.
19. D. G. LOWE: Object recognition from local scale-invariant features, International Conference on Computer Vision, 1999, 1150-1157.
20. H. ISHIDA, T. NAKAMOTO, T. MORIIZUMI: Remote sensing of gas/odor source location and concentration distribution using mobile systems, Sensors and Actuators B, 1998.
21. R. A. RUSSELL: Laying and sensing odor markings as a strategy for assisting mobile robot navigation tasks, IEEE Robotics and Automation Magazine, 2, 1995.
22. H. ISHIDA, K. SUETSUGU, T. NAKAMOTO, T. MORIIZUMI: Study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors, Sensors and Actuators A, 45, 1994.
23. P. J. BESL, N. D. MCKAY: A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 1992, 239-256.